00:00:00.000 Started by upstream project "autotest-per-patch" build number 126183 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.035 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.036 The recommended git tool is: git 00:00:00.036 using credential 00000000-0000-0000-0000-000000000002 00:00:00.038 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.050 Fetching changes from the remote Git repository 00:00:00.052 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.071 Using shallow fetch with depth 1 00:00:00.071 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.071 > git --version # timeout=10 00:00:00.100 > git --version # 'git version 2.39.2' 00:00:00.100 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.128 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.128 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.343 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.356 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.369 Checking out Revision 1e4055c0ee28da4fa0007a72f92a6499a45bf65d (FETCH_HEAD) 00:00:02.369 > git config core.sparsecheckout # timeout=10 00:00:02.380 > git read-tree -mu HEAD # timeout=10 00:00:02.396 > git checkout -f 1e4055c0ee28da4fa0007a72f92a6499a45bf65d # timeout=5 00:00:02.415 Commit message: "packer: Drop centos7" 00:00:02.415 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:02.504 [Pipeline] Start of Pipeline 00:00:02.520 [Pipeline] library 00:00:02.521 Loading library shm_lib@master 00:00:02.522 Library shm_lib@master is cached. Copying from home. 00:00:02.541 [Pipeline] node 00:00:02.548 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:02.553 [Pipeline] { 00:00:02.565 [Pipeline] catchError 00:00:02.566 [Pipeline] { 00:00:02.581 [Pipeline] wrap 00:00:02.592 [Pipeline] { 00:00:02.600 [Pipeline] stage 00:00:02.602 [Pipeline] { (Prologue) 00:00:02.814 [Pipeline] sh 00:00:03.103 + logger -p user.info -t JENKINS-CI 00:00:03.127 [Pipeline] echo 00:00:03.129 Node: CYP9 00:00:03.135 [Pipeline] sh 00:00:03.435 [Pipeline] setCustomBuildProperty 00:00:03.447 [Pipeline] echo 00:00:03.449 Cleanup processes 00:00:03.456 [Pipeline] sh 00:00:03.764 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.764 745731 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.778 [Pipeline] sh 00:00:04.070 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.070 ++ grep -v 'sudo pgrep' 00:00:04.070 ++ awk '{print $1}' 00:00:04.070 + sudo kill -9 00:00:04.070 + true 00:00:04.086 [Pipeline] cleanWs 00:00:04.097 [WS-CLEANUP] Deleting project workspace... 00:00:04.097 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.105 [WS-CLEANUP] done 00:00:04.110 [Pipeline] setCustomBuildProperty 00:00:04.125 [Pipeline] sh 00:00:04.407 + sudo git config --global --replace-all safe.directory '*' 00:00:04.500 [Pipeline] httpRequest 00:00:04.517 [Pipeline] echo 00:00:04.519 Sorcerer 10.211.164.101 is alive 00:00:04.527 [Pipeline] httpRequest 00:00:04.532 HttpMethod: GET 00:00:04.532 URL: http://10.211.164.101/packages/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:04.533 Sending request to url: http://10.211.164.101/packages/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:04.535 Response Code: HTTP/1.1 200 OK 00:00:04.536 Success: Status code 200 is in the accepted range: 200,404 00:00:04.536 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:04.679 [Pipeline] sh 00:00:04.962 + tar --no-same-owner -xf jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:04.978 [Pipeline] httpRequest 00:00:04.995 [Pipeline] echo 00:00:04.997 Sorcerer 10.211.164.101 is alive 00:00:05.004 [Pipeline] httpRequest 00:00:05.009 HttpMethod: GET 00:00:05.009 URL: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:05.010 Sending request to url: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:05.012 Response Code: HTTP/1.1 200 OK 00:00:05.013 Success: Status code 200 is in the accepted range: 200,404 00:00:05.013 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:57.151 [Pipeline] sh 00:00:57.434 + tar --no-same-owner -xf spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:01:00.778 [Pipeline] sh 00:01:01.074 + git -C spdk log --oneline -n5 00:01:01.074 2728651ee accel: adjust task per ch define name 00:01:01.074 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:01:01.074 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:01:01.074 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:01:01.074 719d03c6a sock/uring: only register net impl if supported 00:01:01.090 [Pipeline] } 00:01:01.109 [Pipeline] // stage 00:01:01.121 [Pipeline] stage 00:01:01.124 [Pipeline] { (Prepare) 00:01:01.148 [Pipeline] writeFile 00:01:01.169 [Pipeline] sh 00:01:01.456 + logger -p user.info -t JENKINS-CI 00:01:01.471 [Pipeline] sh 00:01:01.760 + logger -p user.info -t JENKINS-CI 00:01:01.776 [Pipeline] sh 00:01:02.065 + cat autorun-spdk.conf 00:01:02.065 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.065 SPDK_TEST_NVMF=1 00:01:02.065 SPDK_TEST_NVME_CLI=1 00:01:02.065 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:02.065 SPDK_TEST_NVMF_NICS=e810 00:01:02.065 SPDK_TEST_VFIOUSER=1 00:01:02.065 SPDK_RUN_UBSAN=1 00:01:02.065 NET_TYPE=phy 00:01:02.073 RUN_NIGHTLY=0 00:01:02.078 [Pipeline] readFile 00:01:02.109 [Pipeline] withEnv 00:01:02.111 [Pipeline] { 00:01:02.124 [Pipeline] sh 00:01:02.410 + set -ex 00:01:02.410 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:02.410 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:02.410 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.410 ++ SPDK_TEST_NVMF=1 00:01:02.410 ++ SPDK_TEST_NVME_CLI=1 00:01:02.410 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:02.410 ++ SPDK_TEST_NVMF_NICS=e810 00:01:02.410 ++ SPDK_TEST_VFIOUSER=1 00:01:02.410 ++ SPDK_RUN_UBSAN=1 00:01:02.410 ++ NET_TYPE=phy 00:01:02.410 ++ RUN_NIGHTLY=0 00:01:02.410 + case $SPDK_TEST_NVMF_NICS in 00:01:02.410 + DRIVERS=ice 00:01:02.410 + [[ tcp == \r\d\m\a 
]] 00:01:02.410 + [[ -n ice ]] 00:01:02.410 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:02.410 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:09.033 rmmod: ERROR: Module irdma is not currently loaded 00:01:09.033 rmmod: ERROR: Module i40iw is not currently loaded 00:01:09.033 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:09.033 + true 00:01:09.033 + for D in $DRIVERS 00:01:09.033 + sudo modprobe ice 00:01:09.033 + exit 0 00:01:09.043 [Pipeline] } 00:01:09.062 [Pipeline] // withEnv 00:01:09.067 [Pipeline] } 00:01:09.083 [Pipeline] // stage 00:01:09.092 [Pipeline] catchError 00:01:09.093 [Pipeline] { 00:01:09.107 [Pipeline] timeout 00:01:09.107 Timeout set to expire in 50 min 00:01:09.109 [Pipeline] { 00:01:09.123 [Pipeline] stage 00:01:09.125 [Pipeline] { (Tests) 00:01:09.142 [Pipeline] sh 00:01:09.428 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.428 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.428 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.428 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:09.428 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:09.428 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:09.428 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:09.428 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:09.428 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:09.428 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:09.428 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:09.428 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:09.428 + source /etc/os-release 00:01:09.428 ++ NAME='Fedora Linux' 00:01:09.428 ++ VERSION='38 (Cloud Edition)' 00:01:09.428 ++ ID=fedora 00:01:09.428 ++ VERSION_ID=38 00:01:09.428 ++ VERSION_CODENAME= 00:01:09.428 ++ PLATFORM_ID=platform:f38 00:01:09.428 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:09.429 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:09.429 ++ LOGO=fedora-logo-icon 00:01:09.429 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:09.429 ++ HOME_URL=https://fedoraproject.org/ 00:01:09.429 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:09.429 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:09.429 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:09.429 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:09.429 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:09.429 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:09.429 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:09.429 ++ SUPPORT_END=2024-05-14 00:01:09.429 ++ VARIANT='Cloud Edition' 00:01:09.429 ++ VARIANT_ID=cloud 00:01:09.429 + uname -a 00:01:09.429 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:09.429 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:12.728 Hugepages 00:01:12.728 node hugesize free / total 00:01:12.728 node0 1048576kB 0 / 0 00:01:12.728 node0 2048kB 0 / 0 00:01:12.728 node1 1048576kB 0 / 0 00:01:12.728 node1 2048kB 0 / 0 00:01:12.728 00:01:12.728 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:12.728 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:12.728 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:12.728 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:12.728 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:12.728 I/OAT 0000:00:01.4 8086 
0b00 0 ioatdma - - 00:01:12.728 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:12.728 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:12.728 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:12.728 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:12.728 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:12.728 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:12.728 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:12.728 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:12.728 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:12.728 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:12.728 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:12.728 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:12.728 + rm -f /tmp/spdk-ld-path 00:01:12.728 + source autorun-spdk.conf 00:01:12.728 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.728 ++ SPDK_TEST_NVMF=1 00:01:12.728 ++ SPDK_TEST_NVME_CLI=1 00:01:12.728 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.728 ++ SPDK_TEST_NVMF_NICS=e810 00:01:12.728 ++ SPDK_TEST_VFIOUSER=1 00:01:12.728 ++ SPDK_RUN_UBSAN=1 00:01:12.728 ++ NET_TYPE=phy 00:01:12.728 ++ RUN_NIGHTLY=0 00:01:12.728 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:12.728 + [[ -n '' ]] 00:01:12.728 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:12.728 + for M in /var/spdk/build-*-manifest.txt 00:01:12.728 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:12.728 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:12.728 + for M in /var/spdk/build-*-manifest.txt 00:01:12.728 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:12.728 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:12.728 ++ uname 00:01:12.728 + [[ Linux == \L\i\n\u\x ]] 00:01:12.728 + sudo dmesg -T 00:01:12.728 + sudo dmesg --clear 00:01:12.728 + dmesg_pid=746715 00:01:12.728 + [[ Fedora Linux == FreeBSD ]] 00:01:12.728 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:12.728 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:12.728 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:12.728 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:12.728 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:12.728 + [[ -x /usr/src/fio-static/fio ]] 00:01:12.728 + export FIO_BIN=/usr/src/fio-static/fio 00:01:12.728 + FIO_BIN=/usr/src/fio-static/fio 00:01:12.728 + sudo dmesg -Tw 00:01:12.728 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:12.728 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:12.728 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:12.728 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:12.728 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:12.728 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:12.728 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:12.728 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:12.728 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:12.728 Test configuration: 00:01:12.728 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.728 SPDK_TEST_NVMF=1 00:01:12.728 SPDK_TEST_NVME_CLI=1 00:01:12.728 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.728 SPDK_TEST_NVMF_NICS=e810 00:01:12.728 SPDK_TEST_VFIOUSER=1 00:01:12.728 SPDK_RUN_UBSAN=1 00:01:12.728 NET_TYPE=phy 00:01:12.728 RUN_NIGHTLY=0 13:31:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:12.728 13:31:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:12.728 13:31:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:12.728 13:31:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:12.728 13:31:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:12.728 13:31:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:12.728 13:31:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:12.728 13:31:39 -- paths/export.sh@5 -- $ export PATH 00:01:12.728 13:31:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:12.728 13:31:39 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:12.728 13:31:39 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:12.728 13:31:39 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721043099.XXXXXX 00:01:12.728 13:31:39 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721043099.Vc3NFQ 00:01:12.728 13:31:39 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:12.728 13:31:39 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:12.728 13:31:39 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:12.728 13:31:39 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:12.728 13:31:39 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:12.728 13:31:39 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:12.728 13:31:39 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:12.728 13:31:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.728 13:31:39 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:12.728 13:31:39 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:12.728 13:31:39 -- pm/common@17 -- $ local monitor 00:01:12.728 13:31:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:12.728 13:31:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:12.728 13:31:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:12.728 13:31:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:12.728 13:31:39 -- pm/common@21 -- $ date +%s 00:01:12.728 13:31:39 -- pm/common@21 -- $ date +%s 00:01:12.728 13:31:39 -- pm/common@25 -- $ sleep 1 00:01:12.728 13:31:39 -- pm/common@21 -- $ date +%s 00:01:12.728 13:31:39 -- pm/common@21 -- $ date +%s 00:01:12.729 13:31:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721043099 00:01:12.729 13:31:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721043099 00:01:12.729 13:31:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721043099 00:01:12.729 13:31:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721043099 00:01:12.729 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721043099_collect-vmstat.pm.log 00:01:12.729 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721043099_collect-cpu-load.pm.log 00:01:12.729 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721043099_collect-cpu-temp.pm.log 00:01:12.729 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721043099_collect-bmc-pm.bmc.pm.log 00:01:13.674 13:31:40 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:13.674 13:31:40 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:13.674 13:31:40 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:13.674 13:31:40 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.674 13:31:40 -- spdk/autobuild.sh@16 -- $ date -u 00:01:13.674 Mon Jul 15 11:31:40 AM UTC 2024 00:01:13.674 13:31:40 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:13.674 v24.09-pre-206-g2728651ee 00:01:13.674 13:31:40 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:13.674 13:31:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:13.674 13:31:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:13.674 13:31:40 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:13.674 13:31:40 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:13.674 13:31:40 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.674 ************************************ 00:01:13.674 START TEST ubsan 00:01:13.674 ************************************ 00:01:13.674 13:31:40 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:13.674 using ubsan 00:01:13.674 00:01:13.674 real 0m0.000s 00:01:13.674 user 0m0.000s 00:01:13.674 sys 0m0.000s 00:01:13.674 13:31:40 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:13.674 13:31:40 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:13.674 ************************************ 00:01:13.674 END TEST ubsan 00:01:13.674 ************************************ 00:01:13.674 13:31:40 -- common/autotest_common.sh@1142 -- $ return 0 00:01:13.674 13:31:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:13.674 13:31:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:13.674 13:31:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:13.674 13:31:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:13.674 13:31:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:13.674 13:31:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:13.674 13:31:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:13.674 13:31:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:13.674 13:31:40 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:13.935 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:13.935 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:14.196 Using 'verbs' RDMA provider 00:01:30.054 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:42.294 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:42.294 Creating mk/config.mk...done. 00:01:42.294 Creating mk/cc.flags.mk...done. 00:01:42.294 Type 'make' to build. 
00:01:42.294 13:32:08 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:42.294 13:32:08 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:42.294 13:32:08 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:42.294 13:32:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.294 ************************************ 00:01:42.294 START TEST make 00:01:42.294 ************************************ 00:01:42.294 13:32:08 make -- common/autotest_common.sh@1123 -- $ make -j144 00:01:42.555 make[1]: Nothing to be done for 'all'. 00:01:43.577 The Meson build system 00:01:43.577 Version: 1.3.1 00:01:43.577 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:43.577 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:43.577 Build type: native build 00:01:43.577 Project name: libvfio-user 00:01:43.577 Project version: 0.0.1 00:01:43.577 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:43.577 C linker for the host machine: cc ld.bfd 2.39-16 00:01:43.577 Host machine cpu family: x86_64 00:01:43.577 Host machine cpu: x86_64 00:01:43.577 Run-time dependency threads found: YES 00:01:43.577 Library dl found: YES 00:01:43.577 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:43.577 Run-time dependency json-c found: YES 0.17 00:01:43.577 Run-time dependency cmocka found: YES 1.1.7 00:01:43.577 Program pytest-3 found: NO 00:01:43.577 Program flake8 found: NO 00:01:43.577 Program misspell-fixer found: NO 00:01:43.577 Program restructuredtext-lint found: NO 00:01:43.577 Program valgrind found: YES (/usr/bin/valgrind) 00:01:43.577 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:43.577 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:43.577 Compiler for C supports arguments -Wwrite-strings: YES 00:01:43.577 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:43.577 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:43.577 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:43.578 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:43.578 Build targets in project: 8 00:01:43.578 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:43.578 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:43.578 00:01:43.578 libvfio-user 0.0.1 00:01:43.578 00:01:43.578 User defined options 00:01:43.578 buildtype : debug 00:01:43.578 default_library: shared 00:01:43.578 libdir : /usr/local/lib 00:01:43.578 00:01:43.578 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.145 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:44.145 [1/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:44.145 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:44.145 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:44.145 [4/37] Compiling C object samples/null.p/null.c.o 00:01:44.145 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:44.145 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:44.145 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:44.145 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:44.145 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:44.145 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:44.145 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:44.145 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:44.145 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:44.145 [14/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:44.145 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:44.145 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:44.145 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:44.145 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:44.145 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:44.145 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:44.145 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:44.145 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:44.145 [23/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:44.145 [24/37] Compiling C object samples/server.p/server.c.o 00:01:44.145 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:44.145 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:44.145 [27/37] Compiling C object samples/client.p/client.c.o 00:01:44.145 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:44.145 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:44.145 [30/37] Linking target samples/client 00:01:44.406 [31/37] Linking target test/unit_tests 00:01:44.406 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:44.406 [33/37] Linking target samples/null 00:01:44.406 [34/37] Linking target samples/server 00:01:44.406 [35/37] Linking target samples/gpio-pci-idio-16 00:01:44.406 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:44.406 [37/37] Linking target samples/lspci 00:01:44.406 INFO: autodetecting backend as ninja 00:01:44.406 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:44.406 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:44.667 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:44.667 ninja: no work to do. 00:01:51.252 The Meson build system 00:01:51.252 Version: 1.3.1 00:01:51.252 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:51.252 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:51.252 Build type: native build 00:01:51.252 Program cat found: YES (/usr/bin/cat) 00:01:51.252 Project name: DPDK 00:01:51.252 Project version: 24.03.0 00:01:51.252 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:51.252 C linker for the host machine: cc ld.bfd 2.39-16 00:01:51.252 Host machine cpu family: x86_64 00:01:51.252 Host machine cpu: x86_64 00:01:51.252 Message: ## Building in Developer Mode ## 00:01:51.252 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:51.252 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:51.252 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:51.252 Program python3 found: YES (/usr/bin/python3) 00:01:51.252 Program cat found: YES (/usr/bin/cat) 00:01:51.252 Compiler for C supports arguments -march=native: YES 00:01:51.252 Checking for size of "void *" : 8 00:01:51.252 Checking for size of "void *" : 8 (cached) 00:01:51.252 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:51.252 Library m found: YES 00:01:51.252 Library numa found: YES 00:01:51.252 Has header "numaif.h" : YES 00:01:51.252 Library fdt found: NO 00:01:51.252 Library execinfo found: NO 00:01:51.252 Has header "execinfo.h" : YES 00:01:51.252 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:51.252 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:51.252 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:51.252 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:51.252 Run-time dependency openssl found: YES 3.0.9 00:01:51.252 Run-time dependency libpcap found: YES 1.10.4 00:01:51.252 Has header "pcap.h" with dependency libpcap: YES 00:01:51.252 Compiler for C supports arguments -Wcast-qual: YES 00:01:51.252 Compiler for C supports arguments -Wdeprecated: YES 00:01:51.252 Compiler for C supports arguments -Wformat: YES 00:01:51.252 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:51.252 Compiler for C supports arguments -Wformat-security: NO 00:01:51.252 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:51.252 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:51.252 Compiler for C supports arguments -Wnested-externs: YES 00:01:51.252 Compiler for C supports arguments -Wold-style-definition: YES 00:01:51.252 Compiler for C supports arguments -Wpointer-arith: YES 00:01:51.252 Compiler for C supports arguments -Wsign-compare: YES 00:01:51.252 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:51.252 Compiler for C supports arguments -Wundef: YES 00:01:51.252 Compiler for C supports arguments -Wwrite-strings: YES 00:01:51.252 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:51.252 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:51.252 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:51.252 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:51.252 Program objdump found: YES (/usr/bin/objdump) 00:01:51.252 Compiler for C supports arguments -mavx512f: YES 00:01:51.252 Checking if "AVX512 checking" compiles: YES 00:01:51.252 Fetching value of define "__SSE4_2__" : 1 00:01:51.252 Fetching value of define "__AES__" : 1 00:01:51.252 Fetching value of define "__AVX__" : 1 00:01:51.252 Fetching value of define "__AVX2__" : 1 00:01:51.252 Fetching value of define "__AVX512BW__" : 1 00:01:51.252 Fetching value of define "__AVX512CD__" : 1 00:01:51.252 Fetching value of define "__AVX512DQ__" : 1 00:01:51.252 Fetching value of define "__AVX512F__" : 1 00:01:51.252 Fetching value of define "__AVX512VL__" : 1 00:01:51.252 Fetching value of define "__PCLMUL__" : 1 00:01:51.252 Fetching value of define "__RDRND__" : 1 00:01:51.252 Fetching value of define "__RDSEED__" : 1 00:01:51.252 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:51.252 Fetching value of define "__znver1__" : (undefined) 00:01:51.252 Fetching value of define "__znver2__" : (undefined) 00:01:51.253 Fetching value of define "__znver3__" : (undefined) 00:01:51.253 Fetching value of define "__znver4__" : (undefined) 00:01:51.253 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:51.253 Message: lib/log: Defining dependency "log" 00:01:51.253 Message: lib/kvargs: Defining dependency "kvargs" 00:01:51.253 Message: lib/telemetry: Defining dependency "telemetry" 00:01:51.253 Checking for function "getentropy" : NO 00:01:51.253 Message: lib/eal: Defining dependency "eal" 00:01:51.253 Message: lib/ring: Defining dependency "ring" 00:01:51.253 Message: lib/rcu: Defining dependency "rcu" 00:01:51.253 Message: lib/mempool: Defining dependency "mempool" 00:01:51.253 Message: lib/mbuf: Defining dependency "mbuf" 00:01:51.253 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:51.253 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:51.253 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:51.253 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:51.253 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:51.253 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:51.253 Compiler for C supports arguments -mpclmul: YES 00:01:51.253 Compiler for C supports arguments -maes: YES 00:01:51.253 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:51.253 Compiler for C supports arguments -mavx512bw: YES 00:01:51.253 Compiler for C supports arguments -mavx512dq: YES 00:01:51.253 Compiler for C supports arguments -mavx512vl: YES 00:01:51.253 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:51.253 Compiler for C supports arguments -mavx2: YES 00:01:51.253 Compiler for C supports arguments -mavx: YES 00:01:51.253 Message: lib/net: Defining dependency "net" 00:01:51.253 Message: lib/meter: Defining dependency "meter" 00:01:51.253 Message: lib/ethdev: Defining dependency "ethdev" 00:01:51.253 Message: lib/pci: Defining dependency "pci" 00:01:51.253 Message: lib/cmdline: Defining dependency "cmdline" 00:01:51.253 Message: lib/hash: Defining dependency "hash" 00:01:51.253 Message: lib/timer: Defining dependency "timer" 00:01:51.253 Message: lib/compressdev: Defining dependency "compressdev" 00:01:51.253 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:51.253 Message: lib/dmadev: Defining dependency "dmadev" 00:01:51.253 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:01:51.253 Message: lib/power: Defining dependency "power" 00:01:51.253 Message: lib/reorder: Defining dependency "reorder" 00:01:51.253 Message: lib/security: Defining dependency "security" 00:01:51.253 Has header "linux/userfaultfd.h" : YES 00:01:51.253 Has header "linux/vduse.h" : YES 00:01:51.253 Message: lib/vhost: Defining dependency "vhost" 00:01:51.253 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:51.253 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:51.253 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:51.253 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:51.253 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:51.253 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:51.253 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:51.253 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:51.253 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:51.253 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:51.253 Program doxygen found: YES (/usr/bin/doxygen) 00:01:51.253 Configuring doxy-api-html.conf using configuration 00:01:51.253 Configuring doxy-api-man.conf using configuration 00:01:51.253 Program mandb found: YES (/usr/bin/mandb) 00:01:51.253 Program sphinx-build found: NO 00:01:51.253 Configuring rte_build_config.h using configuration 00:01:51.253 Message: 00:01:51.253 ================= 00:01:51.253 Applications Enabled 00:01:51.253 ================= 00:01:51.253 00:01:51.253 apps: 00:01:51.253 00:01:51.253 00:01:51.253 Message: 00:01:51.253 ================= 00:01:51.253 Libraries Enabled 00:01:51.253 ================= 00:01:51.253 00:01:51.253 libs: 00:01:51.253 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:51.253 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:51.253 cryptodev, dmadev, power, reorder, security, vhost, 00:01:51.253 00:01:51.253 Message: 00:01:51.253 =============== 00:01:51.253 Drivers Enabled 00:01:51.253 =============== 00:01:51.253 00:01:51.253 common: 00:01:51.253 00:01:51.253 bus: 00:01:51.253 pci, vdev, 00:01:51.253 mempool: 00:01:51.253 ring, 00:01:51.253 dma: 00:01:51.253 00:01:51.253 net: 00:01:51.253 00:01:51.253 crypto: 00:01:51.253 00:01:51.253 compress: 00:01:51.253 00:01:51.253 vdpa: 00:01:51.253 00:01:51.253 00:01:51.253 Message: 00:01:51.253 ================= 00:01:51.253 Content Skipped 00:01:51.253 ================= 00:01:51.253 00:01:51.253 apps: 00:01:51.253 dumpcap: explicitly disabled via build config 00:01:51.253 graph: explicitly disabled via build config 00:01:51.253 pdump: explicitly disabled via build config 00:01:51.253 proc-info: explicitly disabled via build config 00:01:51.253 test-acl: explicitly disabled via build config 00:01:51.253 test-bbdev: explicitly disabled via build config 00:01:51.253 test-cmdline: explicitly disabled via build config 00:01:51.253 test-compress-perf: explicitly disabled via build config 00:01:51.253 test-crypto-perf: explicitly disabled via build config 00:01:51.253 test-dma-perf: explicitly disabled via build config 00:01:51.253 test-eventdev: explicitly disabled via build config 00:01:51.253 test-fib: explicitly disabled via build config 00:01:51.253 test-flow-perf: explicitly disabled via build config 00:01:51.253 test-gpudev: explicitly disabled via build config 00:01:51.253 
test-mldev: explicitly disabled via build config 00:01:51.253 test-pipeline: explicitly disabled via build config 00:01:51.253 test-pmd: explicitly disabled via build config 00:01:51.253 test-regex: explicitly disabled via build config 00:01:51.253 test-sad: explicitly disabled via build config 00:01:51.253 test-security-perf: explicitly disabled via build config 00:01:51.253 00:01:51.253 libs: 00:01:51.253 argparse: explicitly disabled via build config 00:01:51.253 metrics: explicitly disabled via build config 00:01:51.253 acl: explicitly disabled via build config 00:01:51.253 bbdev: explicitly disabled via build config 00:01:51.253 bitratestats: explicitly disabled via build config 00:01:51.253 bpf: explicitly disabled via build config 00:01:51.253 cfgfile: explicitly disabled via build config 00:01:51.253 distributor: explicitly disabled via build config 00:01:51.253 efd: explicitly disabled via build config 00:01:51.253 eventdev: explicitly disabled via build config 00:01:51.253 dispatcher: explicitly disabled via build config 00:01:51.253 gpudev: explicitly disabled via build config 00:01:51.253 gro: explicitly disabled via build config 00:01:51.253 gso: explicitly disabled via build config 00:01:51.253 ip_frag: explicitly disabled via build config 00:01:51.253 jobstats: explicitly disabled via build config 00:01:51.253 latencystats: explicitly disabled via build config 00:01:51.253 lpm: explicitly disabled via build config 00:01:51.253 member: explicitly disabled via build config 00:01:51.253 pcapng: explicitly disabled via build config 00:01:51.253 rawdev: explicitly disabled via build config 00:01:51.253 regexdev: explicitly disabled via build config 00:01:51.253 mldev: explicitly disabled via build config 00:01:51.253 rib: explicitly disabled via build config 00:01:51.253 sched: explicitly disabled via build config 00:01:51.253 stack: explicitly disabled via build config 00:01:51.253 ipsec: explicitly disabled via build config 00:01:51.253 pdcp: explicitly disabled via build config 00:01:51.253 fib: explicitly disabled via build config 00:01:51.253 port: explicitly disabled via build config 00:01:51.253 pdump: explicitly disabled via build config 00:01:51.253 table: explicitly disabled via build config 00:01:51.253 pipeline: explicitly disabled via build config 00:01:51.253 graph: explicitly disabled via build config 00:01:51.253 node: explicitly disabled via build config 00:01:51.253 00:01:51.253 drivers: 00:01:51.253 common/cpt: not in enabled drivers build config 00:01:51.253 common/dpaax: not in enabled drivers build config 00:01:51.253 common/iavf: not in enabled drivers build config 00:01:51.253 common/idpf: not in enabled drivers build config 00:01:51.253 common/ionic: not in enabled drivers build config 00:01:51.253 common/mvep: not in enabled drivers build config 00:01:51.253 common/octeontx: not in enabled drivers build config 00:01:51.253 bus/auxiliary: not in enabled drivers build config 00:01:51.253 bus/cdx: not in enabled drivers build config 00:01:51.253 bus/dpaa: not in enabled drivers build config 00:01:51.253 bus/fslmc: not in enabled drivers build config 00:01:51.253 bus/ifpga: not in enabled drivers build config 00:01:51.253 bus/platform: not in enabled drivers build config 00:01:51.253 bus/uacce: not in enabled drivers build config 00:01:51.253 bus/vmbus: not in enabled drivers build config 00:01:51.253 common/cnxk: not in enabled drivers build config 00:01:51.253 common/mlx5: not in enabled drivers build config 00:01:51.253 common/nfp: not in enabled drivers 
build config 00:01:51.253 common/nitrox: not in enabled drivers build config 00:01:51.253 common/qat: not in enabled drivers build config 00:01:51.253 common/sfc_efx: not in enabled drivers build config 00:01:51.253 mempool/bucket: not in enabled drivers build config 00:01:51.253 mempool/cnxk: not in enabled drivers build config 00:01:51.253 mempool/dpaa: not in enabled drivers build config 00:01:51.253 mempool/dpaa2: not in enabled drivers build config 00:01:51.253 mempool/octeontx: not in enabled drivers build config 00:01:51.253 mempool/stack: not in enabled drivers build config 00:01:51.253 dma/cnxk: not in enabled drivers build config 00:01:51.253 dma/dpaa: not in enabled drivers build config 00:01:51.253 dma/dpaa2: not in enabled drivers build config 00:01:51.253 dma/hisilicon: not in enabled drivers build config 00:01:51.253 dma/idxd: not in enabled drivers build config 00:01:51.253 dma/ioat: not in enabled drivers build config 00:01:51.253 dma/skeleton: not in enabled drivers build config 00:01:51.253 net/af_packet: not in enabled drivers build config 00:01:51.253 net/af_xdp: not in enabled drivers build config 00:01:51.253 net/ark: not in enabled drivers build config 00:01:51.253 net/atlantic: not in enabled drivers build config 00:01:51.253 net/avp: not in enabled drivers build config 00:01:51.253 net/axgbe: not in enabled drivers build config 00:01:51.253 net/bnx2x: not in enabled drivers build config 00:01:51.253 net/bnxt: not in enabled drivers build config 00:01:51.253 net/bonding: not in enabled drivers build config 00:01:51.253 net/cnxk: not in enabled drivers build config 00:01:51.253 net/cpfl: not in enabled drivers build config 00:01:51.253 net/cxgbe: not in enabled drivers build config 00:01:51.253 net/dpaa: not in enabled drivers build config 00:01:51.253 net/dpaa2: not in enabled drivers build config 00:01:51.253 net/e1000: not in enabled drivers build config 00:01:51.253 net/ena: not in enabled drivers build config 00:01:51.253 net/enetc: not in enabled drivers build config 00:01:51.253 net/enetfec: not in enabled drivers build config 00:01:51.253 net/enic: not in enabled drivers build config 00:01:51.253 net/failsafe: not in enabled drivers build config 00:01:51.253 net/fm10k: not in enabled drivers build config 00:01:51.253 net/gve: not in enabled drivers build config 00:01:51.253 net/hinic: not in enabled drivers build config 00:01:51.253 net/hns3: not in enabled drivers build config 00:01:51.253 net/i40e: not in enabled drivers build config 00:01:51.253 net/iavf: not in enabled drivers build config 00:01:51.253 net/ice: not in enabled drivers build config 00:01:51.253 net/idpf: not in enabled drivers build config 00:01:51.253 net/igc: not in enabled drivers build config 00:01:51.253 net/ionic: not in enabled drivers build config 00:01:51.253 net/ipn3ke: not in enabled drivers build config 00:01:51.253 net/ixgbe: not in enabled drivers build config 00:01:51.253 net/mana: not in enabled drivers build config 00:01:51.253 net/memif: not in enabled drivers build config 00:01:51.253 net/mlx4: not in enabled drivers build config 00:01:51.253 net/mlx5: not in enabled drivers build config 00:01:51.253 net/mvneta: not in enabled drivers build config 00:01:51.253 net/mvpp2: not in enabled drivers build config 00:01:51.253 net/netvsc: not in enabled drivers build config 00:01:51.253 net/nfb: not in enabled drivers build config 00:01:51.253 net/nfp: not in enabled drivers build config 00:01:51.253 net/ngbe: not in enabled drivers build config 00:01:51.253 net/null: not in 
enabled drivers build config 00:01:51.253 net/octeontx: not in enabled drivers build config 00:01:51.253 net/octeon_ep: not in enabled drivers build config 00:01:51.253 net/pcap: not in enabled drivers build config 00:01:51.253 net/pfe: not in enabled drivers build config 00:01:51.253 net/qede: not in enabled drivers build config 00:01:51.253 net/ring: not in enabled drivers build config 00:01:51.253 net/sfc: not in enabled drivers build config 00:01:51.253 net/softnic: not in enabled drivers build config 00:01:51.253 net/tap: not in enabled drivers build config 00:01:51.253 net/thunderx: not in enabled drivers build config 00:01:51.253 net/txgbe: not in enabled drivers build config 00:01:51.253 net/vdev_netvsc: not in enabled drivers build config 00:01:51.253 net/vhost: not in enabled drivers build config 00:01:51.253 net/virtio: not in enabled drivers build config 00:01:51.253 net/vmxnet3: not in enabled drivers build config 00:01:51.253 raw/*: missing internal dependency, "rawdev" 00:01:51.253 crypto/armv8: not in enabled drivers build config 00:01:51.253 crypto/bcmfs: not in enabled drivers build config 00:01:51.253 crypto/caam_jr: not in enabled drivers build config 00:01:51.253 crypto/ccp: not in enabled drivers build config 00:01:51.253 crypto/cnxk: not in enabled drivers build config 00:01:51.253 crypto/dpaa_sec: not in enabled drivers build config 00:01:51.253 crypto/dpaa2_sec: not in enabled drivers build config 00:01:51.253 crypto/ipsec_mb: not in enabled drivers build config 00:01:51.253 crypto/mlx5: not in enabled drivers build config 00:01:51.254 crypto/mvsam: not in enabled drivers build config 00:01:51.254 crypto/nitrox: not in enabled drivers build config 00:01:51.254 crypto/null: not in enabled drivers build config 00:01:51.254 crypto/octeontx: not in enabled drivers build config 00:01:51.254 crypto/openssl: not in enabled drivers build config 00:01:51.254 crypto/scheduler: not in enabled drivers build config 00:01:51.254 crypto/uadk: not in enabled drivers build config 00:01:51.254 crypto/virtio: not in enabled drivers build config 00:01:51.254 compress/isal: not in enabled drivers build config 00:01:51.254 compress/mlx5: not in enabled drivers build config 00:01:51.254 compress/nitrox: not in enabled drivers build config 00:01:51.254 compress/octeontx: not in enabled drivers build config 00:01:51.254 compress/zlib: not in enabled drivers build config 00:01:51.254 regex/*: missing internal dependency, "regexdev" 00:01:51.254 ml/*: missing internal dependency, "mldev" 00:01:51.254 vdpa/ifc: not in enabled drivers build config 00:01:51.254 vdpa/mlx5: not in enabled drivers build config 00:01:51.254 vdpa/nfp: not in enabled drivers build config 00:01:51.254 vdpa/sfc: not in enabled drivers build config 00:01:51.254 event/*: missing internal dependency, "eventdev" 00:01:51.254 baseband/*: missing internal dependency, "bbdev" 00:01:51.254 gpu/*: missing internal dependency, "gpudev" 00:01:51.254 00:01:51.254 00:01:51.254 Build targets in project: 84 00:01:51.254 00:01:51.254 DPDK 24.03.0 00:01:51.254 00:01:51.254 User defined options 00:01:51.254 buildtype : debug 00:01:51.254 default_library : shared 00:01:51.254 libdir : lib 00:01:51.254 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:51.254 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:51.254 c_link_args : 00:01:51.254 cpu_instruction_set: native 00:01:51.254 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:51.254 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:51.254 enable_docs : false 00:01:51.254 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:51.254 enable_kmods : false 00:01:51.254 max_lcores : 128 00:01:51.254 tests : false 00:01:51.254 00:01:51.254 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:51.254 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:51.525 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:51.525 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:51.525 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:51.525 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:51.525 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:51.525 [6/267] Linking static target lib/librte_kvargs.a 00:01:51.525 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:51.525 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:51.525 [9/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:51.525 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:51.525 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:51.525 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:51.525 [13/267] Linking static target lib/librte_log.a 00:01:51.525 [14/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:51.525 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:51.525 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:51.525 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:51.525 [18/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:51.525 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:51.525 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:51.525 [21/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:51.785 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:51.785 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:51.785 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:51.785 [25/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:51.785 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:51.785 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:51.785 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:51.785 [29/267] Linking static target lib/librte_pci.a 00:01:51.785 [30/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:51.785 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 
00:01:51.785 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:51.785 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:51.785 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:51.785 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:51.785 [36/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:51.785 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:51.785 [38/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:52.044 [39/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:52.044 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:52.044 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:52.044 [42/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:52.044 [43/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.044 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:52.044 [45/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.044 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:52.044 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:52.044 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:52.044 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:52.044 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:52.044 [51/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:52.044 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:52.044 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:52.044 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:52.044 [55/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:52.044 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:52.044 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:52.044 [58/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:52.044 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:52.044 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:52.044 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:52.044 [62/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:52.044 [63/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:52.044 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:52.044 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:52.044 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:52.044 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:52.044 [68/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:52.044 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:52.044 [70/267] Linking static target lib/librte_telemetry.a 00:01:52.044 [71/267] 
Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:52.044 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:52.044 [73/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.044 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:52.044 [75/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.044 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:52.044 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:52.044 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:52.044 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:52.044 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:52.044 [81/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:52.044 [82/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:52.044 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:52.044 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:52.044 [85/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:52.044 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:52.044 [87/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:52.044 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:52.044 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.044 [90/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:52.044 [91/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:52.044 [92/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:52.044 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:52.044 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:52.044 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:52.044 [96/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:52.044 [97/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:52.044 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.044 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:52.044 [100/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:52.044 [101/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:52.044 [102/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:52.306 [103/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.306 [104/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:52.306 [105/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:52.306 [106/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:52.306 [107/267] Linking static target lib/librte_meter.a 00:01:52.306 [108/267] Linking static target lib/librte_dmadev.a 00:01:52.306 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:52.306 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:52.306 [111/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:52.306 [112/267] 
Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:52.306 [113/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:52.306 [114/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:52.306 [115/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:52.306 [116/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:52.306 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.306 [118/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:52.306 [119/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.306 [120/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:52.306 [121/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:52.306 [122/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:52.306 [123/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.306 [124/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:52.306 [125/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:52.306 [126/267] Linking static target lib/librte_ring.a 00:01:52.306 [127/267] Linking static target lib/librte_cmdline.a 00:01:52.306 [128/267] Linking static target lib/librte_mempool.a 00:01:52.306 [129/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:52.306 [130/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:52.306 [131/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:52.306 [132/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:52.306 [133/267] Linking static target lib/librte_timer.a 00:01:52.306 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:52.306 [135/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:52.306 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.306 [137/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:52.306 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:52.306 [139/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.306 [140/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:52.306 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:52.306 [142/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:52.306 [143/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:52.306 [144/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:52.306 [145/267] Linking static target lib/librte_net.a 00:01:52.306 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.306 [147/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:52.306 [148/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:52.306 [149/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:52.306 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.306 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:52.306 [152/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 
00:01:52.306 [153/267] Linking target lib/librte_log.so.24.1 00:01:52.306 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:52.306 [155/267] Linking static target lib/librte_power.a 00:01:52.306 [156/267] Linking static target lib/librte_compressdev.a 00:01:52.306 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.306 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:52.306 [159/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:52.306 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:52.306 [161/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:52.306 [162/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:52.306 [163/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:52.306 [164/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:52.306 [165/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:52.306 [166/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:52.306 [167/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:52.306 [168/267] Linking static target lib/librte_reorder.a 00:01:52.306 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:52.306 [170/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:52.306 [171/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.306 [172/267] Linking static target lib/librte_rcu.a 00:01:52.306 [173/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.306 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:52.306 [175/267] Linking static target lib/librte_eal.a 00:01:52.306 [176/267] Linking static target drivers/librte_bus_vdev.a 00:01:52.306 [177/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:52.306 [178/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:52.306 [179/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:52.306 [180/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:52.306 [181/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:52.306 [182/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:52.306 [183/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:52.306 [184/267] Linking static target lib/librte_security.a 00:01:52.306 [185/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:52.306 [186/267] Linking static target lib/librte_mbuf.a 00:01:52.568 [187/267] Linking target lib/librte_kvargs.so.24.1 00:01:52.568 [188/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.568 [189/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:52.568 [190/267] Linking static target lib/librte_hash.a 00:01:52.568 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:52.568 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:52.568 [193/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:52.568 [194/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:52.568 [195/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:52.568 [196/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.568 [197/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:52.568 [198/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.568 [199/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.568 [200/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:52.568 [201/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:52.568 [202/267] Linking static target drivers/librte_mempool_ring.a 00:01:52.568 [203/267] Linking static target lib/librte_cryptodev.a 00:01:52.568 [204/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.568 [205/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.568 [206/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.568 [207/267] Linking static target drivers/librte_bus_pci.a 00:01:52.829 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:52.829 [209/267] Linking target lib/librte_telemetry.so.24.1 00:01:52.829 [210/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.829 [211/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.829 [212/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.829 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.829 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:52.829 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.089 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.089 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.089 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:53.089 [219/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.391 [220/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.391 [221/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:53.391 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.391 [223/267] Linking static target lib/librte_ethdev.a 00:01:53.391 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.654 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.654 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.915 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:53.915 [228/267] Linking static target lib/librte_vhost.a 00:01:54.858 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.244 [230/267] Generating 
lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.832 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.221 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.221 [233/267] Linking target lib/librte_eal.so.24.1 00:02:04.221 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:04.221 [235/267] Linking target lib/librte_pci.so.24.1 00:02:04.221 [236/267] Linking target lib/librte_ring.so.24.1 00:02:04.221 [237/267] Linking target lib/librte_timer.so.24.1 00:02:04.221 [238/267] Linking target lib/librte_meter.so.24.1 00:02:04.221 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:04.221 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:04.482 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:04.482 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:04.482 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:04.482 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:04.482 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:04.482 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:04.482 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:04.482 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:04.482 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:04.482 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:04.742 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:04.742 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:04.742 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:04.742 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:02:04.742 [255/267] Linking target lib/librte_reorder.so.24.1 00:02:04.742 [256/267] Linking target lib/librte_compressdev.so.24.1 00:02:04.742 [257/267] Linking target lib/librte_net.so.24.1 00:02:05.007 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:05.007 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:05.007 [260/267] Linking target lib/librte_cmdline.so.24.1 00:02:05.007 [261/267] Linking target lib/librte_security.so.24.1 00:02:05.007 [262/267] Linking target lib/librte_hash.so.24.1 00:02:05.007 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:05.304 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:05.304 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:05.304 [266/267] Linking target lib/librte_power.so.24.1 00:02:05.304 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:05.304 INFO: autodetecting backend as ninja 00:02:05.304 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:06.252 CC lib/ut/ut.o 00:02:06.252 CC lib/log/log.o 00:02:06.252 CC lib/log/log_flags.o 00:02:06.252 CC lib/log/log_deprecated.o 00:02:06.252 CC lib/ut_mock/mock.o 00:02:06.520 LIB libspdk_ut.a 00:02:06.520 LIB libspdk_ut_mock.a 00:02:06.520 LIB libspdk_log.a 00:02:06.520 SO libspdk_ut.so.2.0 00:02:06.520 SO 
libspdk_ut_mock.so.6.0 00:02:06.520 SO libspdk_log.so.7.0 00:02:06.520 SYMLINK libspdk_ut.so 00:02:06.781 SYMLINK libspdk_ut_mock.so 00:02:06.781 SYMLINK libspdk_log.so 00:02:07.042 CC lib/dma/dma.o 00:02:07.042 CXX lib/trace_parser/trace.o 00:02:07.042 CC lib/util/base64.o 00:02:07.042 CC lib/util/cpuset.o 00:02:07.042 CC lib/util/bit_array.o 00:02:07.042 CC lib/ioat/ioat.o 00:02:07.042 CC lib/util/crc16.o 00:02:07.042 CC lib/util/crc32.o 00:02:07.042 CC lib/util/crc32c.o 00:02:07.042 CC lib/util/crc32_ieee.o 00:02:07.042 CC lib/util/crc64.o 00:02:07.042 CC lib/util/dif.o 00:02:07.042 CC lib/util/fd.o 00:02:07.042 CC lib/util/file.o 00:02:07.042 CC lib/util/hexlify.o 00:02:07.042 CC lib/util/iov.o 00:02:07.042 CC lib/util/math.o 00:02:07.042 CC lib/util/pipe.o 00:02:07.042 CC lib/util/strerror_tls.o 00:02:07.042 CC lib/util/string.o 00:02:07.042 CC lib/util/uuid.o 00:02:07.042 CC lib/util/fd_group.o 00:02:07.042 CC lib/util/xor.o 00:02:07.042 CC lib/util/zipf.o 00:02:07.303 CC lib/vfio_user/host/vfio_user.o 00:02:07.303 CC lib/vfio_user/host/vfio_user_pci.o 00:02:07.303 LIB libspdk_dma.a 00:02:07.303 SO libspdk_dma.so.4.0 00:02:07.303 LIB libspdk_ioat.a 00:02:07.303 SYMLINK libspdk_dma.so 00:02:07.303 SO libspdk_ioat.so.7.0 00:02:07.564 SYMLINK libspdk_ioat.so 00:02:07.564 LIB libspdk_vfio_user.a 00:02:07.564 SO libspdk_vfio_user.so.5.0 00:02:07.564 LIB libspdk_util.a 00:02:07.564 SYMLINK libspdk_vfio_user.so 00:02:07.564 SO libspdk_util.so.9.1 00:02:07.824 SYMLINK libspdk_util.so 00:02:07.824 LIB libspdk_trace_parser.a 00:02:07.824 SO libspdk_trace_parser.so.5.0 00:02:08.084 SYMLINK libspdk_trace_parser.so 00:02:08.084 CC lib/json/json_parse.o 00:02:08.084 CC lib/json/json_util.o 00:02:08.084 CC lib/json/json_write.o 00:02:08.084 CC lib/idxd/idxd.o 00:02:08.084 CC lib/idxd/idxd_user.o 00:02:08.084 CC lib/vmd/vmd.o 00:02:08.084 CC lib/idxd/idxd_kernel.o 00:02:08.084 CC lib/vmd/led.o 00:02:08.084 CC lib/rdma_utils/rdma_utils.o 00:02:08.084 CC lib/conf/conf.o 00:02:08.084 CC lib/rdma_provider/common.o 00:02:08.084 CC lib/env_dpdk/env.o 00:02:08.084 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:08.084 CC lib/env_dpdk/memory.o 00:02:08.084 CC lib/env_dpdk/pci.o 00:02:08.084 CC lib/env_dpdk/init.o 00:02:08.084 CC lib/env_dpdk/threads.o 00:02:08.084 CC lib/env_dpdk/pci_virtio.o 00:02:08.084 CC lib/env_dpdk/pci_ioat.o 00:02:08.084 CC lib/env_dpdk/pci_vmd.o 00:02:08.084 CC lib/env_dpdk/pci_idxd.o 00:02:08.084 CC lib/env_dpdk/pci_event.o 00:02:08.084 CC lib/env_dpdk/sigbus_handler.o 00:02:08.084 CC lib/env_dpdk/pci_dpdk.o 00:02:08.084 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:08.084 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:08.344 LIB libspdk_rdma_provider.a 00:02:08.344 LIB libspdk_conf.a 00:02:08.344 SO libspdk_rdma_provider.so.6.0 00:02:08.344 SO libspdk_conf.so.6.0 00:02:08.344 LIB libspdk_rdma_utils.a 00:02:08.344 LIB libspdk_json.a 00:02:08.344 SO libspdk_rdma_utils.so.1.0 00:02:08.344 SYMLINK libspdk_conf.so 00:02:08.344 SYMLINK libspdk_rdma_provider.so 00:02:08.605 SO libspdk_json.so.6.0 00:02:08.605 SYMLINK libspdk_rdma_utils.so 00:02:08.605 SYMLINK libspdk_json.so 00:02:08.605 LIB libspdk_idxd.a 00:02:08.605 SO libspdk_idxd.so.12.0 00:02:08.605 LIB libspdk_vmd.a 00:02:08.865 SO libspdk_vmd.so.6.0 00:02:08.865 SYMLINK libspdk_idxd.so 00:02:08.865 SYMLINK libspdk_vmd.so 00:02:08.865 CC lib/jsonrpc/jsonrpc_server.o 00:02:08.865 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:08.865 CC lib/jsonrpc/jsonrpc_client.o 00:02:08.865 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:09.125 LIB libspdk_jsonrpc.a 
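The CC/LIB/SO/SYMLINK lines above are SPDK's own quiet make output, layered on top of the bundled DPDK build-tmp directory configured earlier in the log. A rough sketch of driving the same build by hand is shown below; the configure flags are illustrative assumptions only (the exact flags used by this job are not visible in this part of the log, and ./configure --help lists the real set for a given SPDK revision).

# Sketch, not the job's actual commands.
cd spdk
./configure --with-vfio-user --enable-ubsan   # assumed example flags, for illustration only
make -j"$(nproc)"                             # emits the CC/LIB/SO/SYMLINK lines seen above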
00:02:09.125 SO libspdk_jsonrpc.so.6.0 00:02:09.385 SYMLINK libspdk_jsonrpc.so 00:02:09.385 LIB libspdk_env_dpdk.a 00:02:09.385 SO libspdk_env_dpdk.so.14.1 00:02:09.645 SYMLINK libspdk_env_dpdk.so 00:02:09.645 CC lib/rpc/rpc.o 00:02:09.905 LIB libspdk_rpc.a 00:02:09.905 SO libspdk_rpc.so.6.0 00:02:09.905 SYMLINK libspdk_rpc.so 00:02:10.166 CC lib/trace/trace.o 00:02:10.166 CC lib/trace/trace_flags.o 00:02:10.166 CC lib/trace/trace_rpc.o 00:02:10.166 CC lib/notify/notify.o 00:02:10.166 CC lib/notify/notify_rpc.o 00:02:10.166 CC lib/keyring/keyring.o 00:02:10.166 CC lib/keyring/keyring_rpc.o 00:02:10.427 LIB libspdk_notify.a 00:02:10.427 SO libspdk_notify.so.6.0 00:02:10.427 LIB libspdk_keyring.a 00:02:10.427 LIB libspdk_trace.a 00:02:10.688 SO libspdk_keyring.so.1.0 00:02:10.688 SYMLINK libspdk_notify.so 00:02:10.688 SO libspdk_trace.so.10.0 00:02:10.688 SYMLINK libspdk_keyring.so 00:02:10.688 SYMLINK libspdk_trace.so 00:02:10.948 CC lib/thread/thread.o 00:02:10.948 CC lib/sock/sock.o 00:02:10.948 CC lib/thread/iobuf.o 00:02:10.948 CC lib/sock/sock_rpc.o 00:02:11.520 LIB libspdk_sock.a 00:02:11.520 SO libspdk_sock.so.10.0 00:02:11.520 SYMLINK libspdk_sock.so 00:02:11.780 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:11.780 CC lib/nvme/nvme_ctrlr.o 00:02:11.780 CC lib/nvme/nvme_fabric.o 00:02:11.780 CC lib/nvme/nvme_ns_cmd.o 00:02:11.780 CC lib/nvme/nvme_ns.o 00:02:11.780 CC lib/nvme/nvme_pcie_common.o 00:02:11.780 CC lib/nvme/nvme_pcie.o 00:02:11.780 CC lib/nvme/nvme_qpair.o 00:02:11.780 CC lib/nvme/nvme.o 00:02:11.780 CC lib/nvme/nvme_quirks.o 00:02:11.780 CC lib/nvme/nvme_transport.o 00:02:11.780 CC lib/nvme/nvme_discovery.o 00:02:11.780 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:11.780 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:11.780 CC lib/nvme/nvme_tcp.o 00:02:11.780 CC lib/nvme/nvme_opal.o 00:02:11.780 CC lib/nvme/nvme_io_msg.o 00:02:11.780 CC lib/nvme/nvme_poll_group.o 00:02:11.780 CC lib/nvme/nvme_zns.o 00:02:11.780 CC lib/nvme/nvme_stubs.o 00:02:11.780 CC lib/nvme/nvme_auth.o 00:02:11.780 CC lib/nvme/nvme_cuse.o 00:02:11.780 CC lib/nvme/nvme_vfio_user.o 00:02:11.780 CC lib/nvme/nvme_rdma.o 00:02:12.350 LIB libspdk_thread.a 00:02:12.350 SO libspdk_thread.so.10.1 00:02:12.350 SYMLINK libspdk_thread.so 00:02:12.611 CC lib/blob/blobstore.o 00:02:12.611 CC lib/blob/zeroes.o 00:02:12.611 CC lib/blob/request.o 00:02:12.611 CC lib/blob/blob_bs_dev.o 00:02:12.611 CC lib/accel/accel.o 00:02:12.611 CC lib/accel/accel_rpc.o 00:02:12.611 CC lib/accel/accel_sw.o 00:02:12.611 CC lib/virtio/virtio.o 00:02:12.611 CC lib/virtio/virtio_vhost_user.o 00:02:12.611 CC lib/virtio/virtio_pci.o 00:02:12.611 CC lib/virtio/virtio_vfio_user.o 00:02:12.611 CC lib/init/json_config.o 00:02:12.611 CC lib/init/subsystem.o 00:02:12.611 CC lib/init/subsystem_rpc.o 00:02:12.611 CC lib/init/rpc.o 00:02:12.611 CC lib/vfu_tgt/tgt_endpoint.o 00:02:12.611 CC lib/vfu_tgt/tgt_rpc.o 00:02:12.872 LIB libspdk_init.a 00:02:12.872 SO libspdk_init.so.5.0 00:02:12.872 LIB libspdk_virtio.a 00:02:13.133 LIB libspdk_vfu_tgt.a 00:02:13.133 SO libspdk_virtio.so.7.0 00:02:13.133 SO libspdk_vfu_tgt.so.3.0 00:02:13.133 SYMLINK libspdk_init.so 00:02:13.133 SYMLINK libspdk_virtio.so 00:02:13.133 SYMLINK libspdk_vfu_tgt.so 00:02:13.394 CC lib/event/app.o 00:02:13.394 CC lib/event/reactor.o 00:02:13.394 CC lib/event/log_rpc.o 00:02:13.394 CC lib/event/app_rpc.o 00:02:13.394 CC lib/event/scheduler_static.o 00:02:13.655 LIB libspdk_accel.a 00:02:13.655 SO libspdk_accel.so.15.1 00:02:13.655 LIB libspdk_nvme.a 00:02:13.655 SYMLINK libspdk_accel.so 00:02:13.655 SO 
libspdk_nvme.so.13.1 00:02:13.918 LIB libspdk_event.a 00:02:13.918 SO libspdk_event.so.14.0 00:02:13.918 SYMLINK libspdk_event.so 00:02:13.918 CC lib/bdev/bdev.o 00:02:13.918 CC lib/bdev/bdev_rpc.o 00:02:13.918 CC lib/bdev/bdev_zone.o 00:02:13.918 CC lib/bdev/part.o 00:02:13.918 CC lib/bdev/scsi_nvme.o 00:02:14.180 SYMLINK libspdk_nvme.so 00:02:15.123 LIB libspdk_blob.a 00:02:15.384 SO libspdk_blob.so.11.0 00:02:15.384 SYMLINK libspdk_blob.so 00:02:15.645 CC lib/lvol/lvol.o 00:02:15.645 CC lib/blobfs/blobfs.o 00:02:15.645 CC lib/blobfs/tree.o 00:02:16.217 LIB libspdk_bdev.a 00:02:16.217 SO libspdk_bdev.so.15.1 00:02:16.478 SYMLINK libspdk_bdev.so 00:02:16.478 LIB libspdk_blobfs.a 00:02:16.478 SO libspdk_blobfs.so.10.0 00:02:16.478 LIB libspdk_lvol.a 00:02:16.478 SO libspdk_lvol.so.10.0 00:02:16.478 SYMLINK libspdk_blobfs.so 00:02:16.739 SYMLINK libspdk_lvol.so 00:02:16.739 CC lib/nbd/nbd.o 00:02:16.739 CC lib/nbd/nbd_rpc.o 00:02:16.739 CC lib/scsi/dev.o 00:02:16.739 CC lib/ublk/ublk.o 00:02:16.739 CC lib/scsi/lun.o 00:02:16.739 CC lib/nvmf/ctrlr.o 00:02:16.739 CC lib/nvmf/ctrlr_bdev.o 00:02:16.739 CC lib/ublk/ublk_rpc.o 00:02:16.739 CC lib/scsi/port.o 00:02:16.739 CC lib/nvmf/ctrlr_discovery.o 00:02:16.739 CC lib/scsi/scsi.o 00:02:16.739 CC lib/scsi/scsi_bdev.o 00:02:16.739 CC lib/ftl/ftl_core.o 00:02:16.739 CC lib/nvmf/subsystem.o 00:02:16.739 CC lib/scsi/scsi_pr.o 00:02:16.739 CC lib/ftl/ftl_init.o 00:02:16.739 CC lib/nvmf/nvmf.o 00:02:16.739 CC lib/scsi/scsi_rpc.o 00:02:16.739 CC lib/ftl/ftl_layout.o 00:02:16.739 CC lib/nvmf/nvmf_rpc.o 00:02:16.739 CC lib/scsi/task.o 00:02:16.739 CC lib/nvmf/transport.o 00:02:16.739 CC lib/ftl/ftl_debug.o 00:02:16.739 CC lib/ftl/ftl_io.o 00:02:16.739 CC lib/nvmf/tcp.o 00:02:16.739 CC lib/ftl/ftl_sb.o 00:02:16.739 CC lib/nvmf/stubs.o 00:02:16.739 CC lib/ftl/ftl_l2p.o 00:02:16.739 CC lib/nvmf/mdns_server.o 00:02:16.739 CC lib/ftl/ftl_l2p_flat.o 00:02:16.739 CC lib/nvmf/vfio_user.o 00:02:16.739 CC lib/ftl/ftl_nv_cache.o 00:02:16.739 CC lib/nvmf/rdma.o 00:02:16.739 CC lib/nvmf/auth.o 00:02:16.739 CC lib/ftl/ftl_band.o 00:02:16.739 CC lib/ftl/ftl_band_ops.o 00:02:16.739 CC lib/ftl/ftl_writer.o 00:02:16.739 CC lib/ftl/ftl_rq.o 00:02:16.739 CC lib/ftl/ftl_reloc.o 00:02:16.739 CC lib/ftl/ftl_l2p_cache.o 00:02:16.739 CC lib/ftl/ftl_p2l.o 00:02:16.739 CC lib/ftl/mngt/ftl_mngt.o 00:02:16.739 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:16.739 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:16.739 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:16.739 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:16.739 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:16.739 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:16.739 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:16.739 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:16.739 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:16.739 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:16.739 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:16.739 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:16.739 CC lib/ftl/utils/ftl_conf.o 00:02:16.739 CC lib/ftl/utils/ftl_md.o 00:02:16.739 CC lib/ftl/utils/ftl_mempool.o 00:02:16.739 CC lib/ftl/utils/ftl_property.o 00:02:16.739 CC lib/ftl/utils/ftl_bitmap.o 00:02:16.739 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:16.739 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:16.739 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:16.739 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:16.739 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:16.739 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:16.739 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:16.739 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:16.739 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:02:16.739 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:16.739 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:16.739 CC lib/ftl/base/ftl_base_dev.o 00:02:16.739 CC lib/ftl/ftl_trace.o 00:02:16.739 CC lib/ftl/base/ftl_base_bdev.o 00:02:17.305 LIB libspdk_nbd.a 00:02:17.305 SO libspdk_nbd.so.7.0 00:02:17.305 LIB libspdk_scsi.a 00:02:17.305 SYMLINK libspdk_nbd.so 00:02:17.305 SO libspdk_scsi.so.9.0 00:02:17.567 LIB libspdk_ublk.a 00:02:17.567 SYMLINK libspdk_scsi.so 00:02:17.567 SO libspdk_ublk.so.3.0 00:02:17.567 SYMLINK libspdk_ublk.so 00:02:17.828 LIB libspdk_ftl.a 00:02:17.828 CC lib/vhost/vhost.o 00:02:17.828 CC lib/vhost/vhost_rpc.o 00:02:17.828 CC lib/vhost/vhost_scsi.o 00:02:17.828 CC lib/vhost/vhost_blk.o 00:02:17.828 CC lib/vhost/rte_vhost_user.o 00:02:17.828 CC lib/iscsi/conn.o 00:02:17.828 CC lib/iscsi/init_grp.o 00:02:17.828 CC lib/iscsi/md5.o 00:02:17.828 CC lib/iscsi/param.o 00:02:17.828 CC lib/iscsi/iscsi.o 00:02:17.828 CC lib/iscsi/portal_grp.o 00:02:17.828 CC lib/iscsi/tgt_node.o 00:02:17.828 CC lib/iscsi/iscsi_subsystem.o 00:02:17.828 CC lib/iscsi/iscsi_rpc.o 00:02:17.828 CC lib/iscsi/task.o 00:02:18.089 SO libspdk_ftl.so.9.0 00:02:18.350 SYMLINK libspdk_ftl.so 00:02:18.610 LIB libspdk_nvmf.a 00:02:18.871 SO libspdk_nvmf.so.18.1 00:02:18.871 LIB libspdk_vhost.a 00:02:18.871 SO libspdk_vhost.so.8.0 00:02:18.871 SYMLINK libspdk_nvmf.so 00:02:18.871 SYMLINK libspdk_vhost.so 00:02:19.131 LIB libspdk_iscsi.a 00:02:19.131 SO libspdk_iscsi.so.8.0 00:02:19.391 SYMLINK libspdk_iscsi.so 00:02:20.007 CC module/env_dpdk/env_dpdk_rpc.o 00:02:20.007 CC module/vfu_device/vfu_virtio.o 00:02:20.007 CC module/vfu_device/vfu_virtio_blk.o 00:02:20.007 CC module/vfu_device/vfu_virtio_scsi.o 00:02:20.007 CC module/vfu_device/vfu_virtio_rpc.o 00:02:20.007 LIB libspdk_env_dpdk_rpc.a 00:02:20.007 CC module/accel/error/accel_error.o 00:02:20.007 CC module/accel/error/accel_error_rpc.o 00:02:20.007 CC module/accel/iaa/accel_iaa.o 00:02:20.007 CC module/blob/bdev/blob_bdev.o 00:02:20.007 CC module/accel/iaa/accel_iaa_rpc.o 00:02:20.007 CC module/keyring/linux/keyring.o 00:02:20.007 CC module/accel/dsa/accel_dsa.o 00:02:20.007 CC module/keyring/file/keyring.o 00:02:20.007 CC module/keyring/linux/keyring_rpc.o 00:02:20.007 CC module/keyring/file/keyring_rpc.o 00:02:20.007 SO libspdk_env_dpdk_rpc.so.6.0 00:02:20.007 CC module/accel/dsa/accel_dsa_rpc.o 00:02:20.007 CC module/sock/posix/posix.o 00:02:20.007 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:20.007 CC module/accel/ioat/accel_ioat.o 00:02:20.007 CC module/accel/ioat/accel_ioat_rpc.o 00:02:20.007 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:20.007 CC module/scheduler/gscheduler/gscheduler.o 00:02:20.007 SYMLINK libspdk_env_dpdk_rpc.so 00:02:20.268 LIB libspdk_keyring_linux.a 00:02:20.268 LIB libspdk_keyring_file.a 00:02:20.268 LIB libspdk_scheduler_dpdk_governor.a 00:02:20.268 LIB libspdk_scheduler_gscheduler.a 00:02:20.268 SO libspdk_keyring_linux.so.1.0 00:02:20.268 SO libspdk_keyring_file.so.1.0 00:02:20.268 LIB libspdk_accel_error.a 00:02:20.268 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:20.268 LIB libspdk_scheduler_dynamic.a 00:02:20.268 LIB libspdk_accel_ioat.a 00:02:20.268 SO libspdk_scheduler_gscheduler.so.4.0 00:02:20.268 LIB libspdk_accel_iaa.a 00:02:20.268 SO libspdk_accel_error.so.2.0 00:02:20.268 SO libspdk_scheduler_dynamic.so.4.0 00:02:20.268 LIB libspdk_accel_dsa.a 00:02:20.268 SO libspdk_accel_ioat.so.6.0 00:02:20.268 SO libspdk_accel_iaa.so.3.0 00:02:20.268 SYMLINK 
libspdk_scheduler_dpdk_governor.so 00:02:20.268 SYMLINK libspdk_keyring_linux.so 00:02:20.268 LIB libspdk_blob_bdev.a 00:02:20.268 SYMLINK libspdk_keyring_file.so 00:02:20.268 SYMLINK libspdk_scheduler_gscheduler.so 00:02:20.268 SYMLINK libspdk_accel_error.so 00:02:20.268 SO libspdk_accel_dsa.so.5.0 00:02:20.268 SO libspdk_blob_bdev.so.11.0 00:02:20.268 SYMLINK libspdk_scheduler_dynamic.so 00:02:20.268 SYMLINK libspdk_accel_ioat.so 00:02:20.530 SYMLINK libspdk_accel_iaa.so 00:02:20.530 SYMLINK libspdk_accel_dsa.so 00:02:20.530 LIB libspdk_vfu_device.a 00:02:20.530 SYMLINK libspdk_blob_bdev.so 00:02:20.530 SO libspdk_vfu_device.so.3.0 00:02:20.530 SYMLINK libspdk_vfu_device.so 00:02:20.792 LIB libspdk_sock_posix.a 00:02:20.792 SO libspdk_sock_posix.so.6.0 00:02:20.792 SYMLINK libspdk_sock_posix.so 00:02:21.051 CC module/bdev/gpt/gpt.o 00:02:21.051 CC module/bdev/gpt/vbdev_gpt.o 00:02:21.051 CC module/blobfs/bdev/blobfs_bdev.o 00:02:21.051 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:21.051 CC module/bdev/delay/vbdev_delay.o 00:02:21.051 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:21.051 CC module/bdev/aio/bdev_aio.o 00:02:21.051 CC module/bdev/aio/bdev_aio_rpc.o 00:02:21.051 CC module/bdev/error/vbdev_error.o 00:02:21.051 CC module/bdev/error/vbdev_error_rpc.o 00:02:21.051 CC module/bdev/null/bdev_null.o 00:02:21.051 CC module/bdev/null/bdev_null_rpc.o 00:02:21.051 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:21.051 CC module/bdev/lvol/vbdev_lvol.o 00:02:21.051 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:21.051 CC module/bdev/nvme/bdev_nvme.o 00:02:21.051 CC module/bdev/raid/bdev_raid.o 00:02:21.051 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:21.051 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:21.051 CC module/bdev/nvme/nvme_rpc.o 00:02:21.051 CC module/bdev/nvme/bdev_mdns_client.o 00:02:21.051 CC module/bdev/raid/bdev_raid_rpc.o 00:02:21.051 CC module/bdev/raid/bdev_raid_sb.o 00:02:21.051 CC module/bdev/nvme/vbdev_opal.o 00:02:21.051 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:21.051 CC module/bdev/raid/raid0.o 00:02:21.051 CC module/bdev/raid/raid1.o 00:02:21.051 CC module/bdev/passthru/vbdev_passthru.o 00:02:21.051 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:21.051 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:21.051 CC module/bdev/malloc/bdev_malloc.o 00:02:21.051 CC module/bdev/raid/concat.o 00:02:21.051 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:21.051 CC module/bdev/ftl/bdev_ftl.o 00:02:21.051 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:21.051 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:21.051 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:21.051 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:21.051 CC module/bdev/iscsi/bdev_iscsi.o 00:02:21.051 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:21.051 CC module/bdev/split/vbdev_split.o 00:02:21.051 CC module/bdev/split/vbdev_split_rpc.o 00:02:21.311 LIB libspdk_blobfs_bdev.a 00:02:21.311 SO libspdk_blobfs_bdev.so.6.0 00:02:21.311 LIB libspdk_bdev_error.a 00:02:21.311 SYMLINK libspdk_blobfs_bdev.so 00:02:21.311 LIB libspdk_bdev_gpt.a 00:02:21.311 LIB libspdk_bdev_null.a 00:02:21.311 LIB libspdk_bdev_split.a 00:02:21.311 SO libspdk_bdev_error.so.6.0 00:02:21.311 LIB libspdk_bdev_ftl.a 00:02:21.311 SO libspdk_bdev_null.so.6.0 00:02:21.311 LIB libspdk_bdev_passthru.a 00:02:21.311 SO libspdk_bdev_gpt.so.6.0 00:02:21.311 LIB libspdk_bdev_zone_block.a 00:02:21.311 SO libspdk_bdev_split.so.6.0 00:02:21.311 LIB libspdk_bdev_aio.a 00:02:21.311 SO libspdk_bdev_ftl.so.6.0 00:02:21.311 SO libspdk_bdev_passthru.so.6.0 00:02:21.311 
SYMLINK libspdk_bdev_error.so 00:02:21.311 LIB libspdk_bdev_delay.a 00:02:21.311 LIB libspdk_bdev_iscsi.a 00:02:21.311 SO libspdk_bdev_zone_block.so.6.0 00:02:21.311 SYMLINK libspdk_bdev_null.so 00:02:21.312 SO libspdk_bdev_aio.so.6.0 00:02:21.312 SYMLINK libspdk_bdev_split.so 00:02:21.312 LIB libspdk_bdev_malloc.a 00:02:21.312 SYMLINK libspdk_bdev_gpt.so 00:02:21.312 SYMLINK libspdk_bdev_ftl.so 00:02:21.625 SO libspdk_bdev_delay.so.6.0 00:02:21.625 SO libspdk_bdev_iscsi.so.6.0 00:02:21.625 SYMLINK libspdk_bdev_passthru.so 00:02:21.625 SO libspdk_bdev_malloc.so.6.0 00:02:21.625 SYMLINK libspdk_bdev_zone_block.so 00:02:21.625 SYMLINK libspdk_bdev_aio.so 00:02:21.625 LIB libspdk_bdev_lvol.a 00:02:21.625 SYMLINK libspdk_bdev_delay.so 00:02:21.625 SYMLINK libspdk_bdev_iscsi.so 00:02:21.625 SYMLINK libspdk_bdev_malloc.so 00:02:21.625 SO libspdk_bdev_lvol.so.6.0 00:02:21.625 LIB libspdk_bdev_virtio.a 00:02:21.625 SO libspdk_bdev_virtio.so.6.0 00:02:21.625 SYMLINK libspdk_bdev_lvol.so 00:02:21.625 SYMLINK libspdk_bdev_virtio.so 00:02:21.885 LIB libspdk_bdev_raid.a 00:02:21.885 SO libspdk_bdev_raid.so.6.0 00:02:22.145 SYMLINK libspdk_bdev_raid.so 00:02:23.087 LIB libspdk_bdev_nvme.a 00:02:23.087 SO libspdk_bdev_nvme.so.7.0 00:02:23.087 SYMLINK libspdk_bdev_nvme.so 00:02:23.658 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:23.658 CC module/event/subsystems/iobuf/iobuf.o 00:02:23.658 CC module/event/subsystems/vmd/vmd.o 00:02:23.658 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:23.658 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:23.658 CC module/event/subsystems/keyring/keyring.o 00:02:23.658 CC module/event/subsystems/scheduler/scheduler.o 00:02:23.658 CC module/event/subsystems/sock/sock.o 00:02:23.658 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:23.919 LIB libspdk_event_keyring.a 00:02:23.919 LIB libspdk_event_vfu_tgt.a 00:02:23.919 LIB libspdk_event_vmd.a 00:02:23.919 LIB libspdk_event_vhost_blk.a 00:02:23.919 LIB libspdk_event_scheduler.a 00:02:23.919 LIB libspdk_event_iobuf.a 00:02:23.919 LIB libspdk_event_sock.a 00:02:23.919 SO libspdk_event_vfu_tgt.so.3.0 00:02:23.919 SO libspdk_event_keyring.so.1.0 00:02:23.919 SO libspdk_event_vmd.so.6.0 00:02:23.919 SO libspdk_event_vhost_blk.so.3.0 00:02:23.919 SO libspdk_event_sock.so.5.0 00:02:23.919 SO libspdk_event_scheduler.so.4.0 00:02:23.919 SO libspdk_event_iobuf.so.3.0 00:02:23.919 SYMLINK libspdk_event_keyring.so 00:02:23.919 SYMLINK libspdk_event_vfu_tgt.so 00:02:23.919 SYMLINK libspdk_event_vhost_blk.so 00:02:23.919 SYMLINK libspdk_event_vmd.so 00:02:23.919 SYMLINK libspdk_event_sock.so 00:02:23.919 SYMLINK libspdk_event_scheduler.so 00:02:23.919 SYMLINK libspdk_event_iobuf.so 00:02:24.490 CC module/event/subsystems/accel/accel.o 00:02:24.490 LIB libspdk_event_accel.a 00:02:24.490 SO libspdk_event_accel.so.6.0 00:02:24.750 SYMLINK libspdk_event_accel.so 00:02:25.011 CC module/event/subsystems/bdev/bdev.o 00:02:25.272 LIB libspdk_event_bdev.a 00:02:25.272 SO libspdk_event_bdev.so.6.0 00:02:25.272 SYMLINK libspdk_event_bdev.so 00:02:25.534 CC module/event/subsystems/scsi/scsi.o 00:02:25.534 CC module/event/subsystems/nbd/nbd.o 00:02:25.534 CC module/event/subsystems/ublk/ublk.o 00:02:25.534 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:25.534 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:25.795 LIB libspdk_event_scsi.a 00:02:25.795 LIB libspdk_event_nbd.a 00:02:25.795 LIB libspdk_event_ublk.a 00:02:25.795 SO libspdk_event_scsi.so.6.0 00:02:25.795 SO libspdk_event_nbd.so.6.0 00:02:25.795 SO libspdk_event_ublk.so.3.0 
00:02:25.795 LIB libspdk_event_nvmf.a 00:02:25.795 SYMLINK libspdk_event_scsi.so 00:02:25.795 SYMLINK libspdk_event_nbd.so 00:02:25.795 SO libspdk_event_nvmf.so.6.0 00:02:25.795 SYMLINK libspdk_event_ublk.so 00:02:26.057 SYMLINK libspdk_event_nvmf.so 00:02:26.319 CC module/event/subsystems/iscsi/iscsi.o 00:02:26.319 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:26.319 LIB libspdk_event_vhost_scsi.a 00:02:26.319 LIB libspdk_event_iscsi.a 00:02:26.319 SO libspdk_event_vhost_scsi.so.3.0 00:02:26.319 SO libspdk_event_iscsi.so.6.0 00:02:26.580 SYMLINK libspdk_event_vhost_scsi.so 00:02:26.580 SYMLINK libspdk_event_iscsi.so 00:02:26.580 SO libspdk.so.6.0 00:02:26.580 SYMLINK libspdk.so 00:02:27.153 CC app/trace_record/trace_record.o 00:02:27.153 CC test/rpc_client/rpc_client_test.o 00:02:27.153 CC app/spdk_top/spdk_top.o 00:02:27.153 TEST_HEADER include/spdk/accel_module.h 00:02:27.153 TEST_HEADER include/spdk/accel.h 00:02:27.153 CXX app/trace/trace.o 00:02:27.153 TEST_HEADER include/spdk/barrier.h 00:02:27.153 TEST_HEADER include/spdk/assert.h 00:02:27.153 CC app/spdk_lspci/spdk_lspci.o 00:02:27.153 CC app/spdk_nvme_identify/identify.o 00:02:27.153 CC app/spdk_nvme_perf/perf.o 00:02:27.153 CC app/spdk_nvme_discover/discovery_aer.o 00:02:27.153 TEST_HEADER include/spdk/base64.h 00:02:27.153 TEST_HEADER include/spdk/bdev.h 00:02:27.153 TEST_HEADER include/spdk/bdev_module.h 00:02:27.153 TEST_HEADER include/spdk/bdev_zone.h 00:02:27.153 TEST_HEADER include/spdk/bit_pool.h 00:02:27.153 TEST_HEADER include/spdk/bit_array.h 00:02:27.153 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:27.153 TEST_HEADER include/spdk/blob_bdev.h 00:02:27.153 TEST_HEADER include/spdk/blobfs.h 00:02:27.153 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:27.153 TEST_HEADER include/spdk/blob.h 00:02:27.153 TEST_HEADER include/spdk/conf.h 00:02:27.153 TEST_HEADER include/spdk/config.h 00:02:27.153 TEST_HEADER include/spdk/cpuset.h 00:02:27.153 TEST_HEADER include/spdk/crc16.h 00:02:27.153 TEST_HEADER include/spdk/crc32.h 00:02:27.153 TEST_HEADER include/spdk/crc64.h 00:02:27.153 TEST_HEADER include/spdk/dif.h 00:02:27.153 TEST_HEADER include/spdk/dma.h 00:02:27.153 TEST_HEADER include/spdk/endian.h 00:02:27.153 CC app/nvmf_tgt/nvmf_main.o 00:02:27.153 TEST_HEADER include/spdk/env_dpdk.h 00:02:27.153 TEST_HEADER include/spdk/env.h 00:02:27.153 CC app/spdk_dd/spdk_dd.o 00:02:27.153 TEST_HEADER include/spdk/event.h 00:02:27.153 TEST_HEADER include/spdk/fd_group.h 00:02:27.153 TEST_HEADER include/spdk/fd.h 00:02:27.153 TEST_HEADER include/spdk/file.h 00:02:27.153 TEST_HEADER include/spdk/ftl.h 00:02:27.153 TEST_HEADER include/spdk/gpt_spec.h 00:02:27.153 TEST_HEADER include/spdk/hexlify.h 00:02:27.153 TEST_HEADER include/spdk/histogram_data.h 00:02:27.153 CC app/iscsi_tgt/iscsi_tgt.o 00:02:27.153 TEST_HEADER include/spdk/idxd.h 00:02:27.153 TEST_HEADER include/spdk/idxd_spec.h 00:02:27.153 TEST_HEADER include/spdk/ioat.h 00:02:27.153 TEST_HEADER include/spdk/init.h 00:02:27.153 TEST_HEADER include/spdk/iscsi_spec.h 00:02:27.153 TEST_HEADER include/spdk/ioat_spec.h 00:02:27.153 TEST_HEADER include/spdk/json.h 00:02:27.153 TEST_HEADER include/spdk/keyring.h 00:02:27.153 TEST_HEADER include/spdk/jsonrpc.h 00:02:27.153 TEST_HEADER include/spdk/keyring_module.h 00:02:27.153 TEST_HEADER include/spdk/likely.h 00:02:27.153 TEST_HEADER include/spdk/lvol.h 00:02:27.153 TEST_HEADER include/spdk/log.h 00:02:27.153 TEST_HEADER include/spdk/memory.h 00:02:27.153 TEST_HEADER include/spdk/mmio.h 00:02:27.153 CC app/spdk_tgt/spdk_tgt.o 
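Among the application objects above is app/spdk_tgt, the general-purpose target binary that NVMe-oF/TCP tests drive over JSON-RPC once the build finishes. As a rough, hedged usage sketch only: the RPC names below are standard scripts/rpc.py commands, but the NQN, bdev name, address and port are made-up placeholders, not values taken from this job.

# Illustrative only; names and addresses are placeholders.
./build/bin/spdk_tgt &                                  # start the target application
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB RAM-backed bdev to export
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420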
00:02:27.153 TEST_HEADER include/spdk/nbd.h 00:02:27.153 TEST_HEADER include/spdk/notify.h 00:02:27.153 TEST_HEADER include/spdk/nvme.h 00:02:27.153 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:27.153 TEST_HEADER include/spdk/nvme_intel.h 00:02:27.153 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:27.153 TEST_HEADER include/spdk/nvme_zns.h 00:02:27.153 TEST_HEADER include/spdk/nvme_spec.h 00:02:27.153 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:27.153 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:27.153 TEST_HEADER include/spdk/nvmf.h 00:02:27.153 TEST_HEADER include/spdk/nvmf_spec.h 00:02:27.153 TEST_HEADER include/spdk/opal_spec.h 00:02:27.153 TEST_HEADER include/spdk/pci_ids.h 00:02:27.153 TEST_HEADER include/spdk/opal.h 00:02:27.153 TEST_HEADER include/spdk/nvmf_transport.h 00:02:27.153 TEST_HEADER include/spdk/pipe.h 00:02:27.153 TEST_HEADER include/spdk/queue.h 00:02:27.153 TEST_HEADER include/spdk/reduce.h 00:02:27.153 TEST_HEADER include/spdk/scheduler.h 00:02:27.153 TEST_HEADER include/spdk/scsi.h 00:02:27.153 TEST_HEADER include/spdk/rpc.h 00:02:27.153 TEST_HEADER include/spdk/scsi_spec.h 00:02:27.153 TEST_HEADER include/spdk/sock.h 00:02:27.153 TEST_HEADER include/spdk/stdinc.h 00:02:27.153 TEST_HEADER include/spdk/string.h 00:02:27.153 TEST_HEADER include/spdk/thread.h 00:02:27.153 TEST_HEADER include/spdk/trace.h 00:02:27.153 TEST_HEADER include/spdk/trace_parser.h 00:02:27.153 TEST_HEADER include/spdk/tree.h 00:02:27.153 TEST_HEADER include/spdk/util.h 00:02:27.153 TEST_HEADER include/spdk/ublk.h 00:02:27.153 TEST_HEADER include/spdk/uuid.h 00:02:27.153 TEST_HEADER include/spdk/version.h 00:02:27.153 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:27.153 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:27.153 TEST_HEADER include/spdk/vhost.h 00:02:27.153 TEST_HEADER include/spdk/xor.h 00:02:27.153 TEST_HEADER include/spdk/vmd.h 00:02:27.153 TEST_HEADER include/spdk/zipf.h 00:02:27.153 CXX test/cpp_headers/accel.o 00:02:27.153 CXX test/cpp_headers/accel_module.o 00:02:27.153 CXX test/cpp_headers/assert.o 00:02:27.153 CXX test/cpp_headers/barrier.o 00:02:27.153 CXX test/cpp_headers/base64.o 00:02:27.153 CXX test/cpp_headers/bdev.o 00:02:27.153 CXX test/cpp_headers/bdev_zone.o 00:02:27.153 CXX test/cpp_headers/bdev_module.o 00:02:27.153 CXX test/cpp_headers/bit_array.o 00:02:27.153 CXX test/cpp_headers/bit_pool.o 00:02:27.153 CXX test/cpp_headers/blob_bdev.o 00:02:27.153 CXX test/cpp_headers/blobfs_bdev.o 00:02:27.153 CXX test/cpp_headers/blobfs.o 00:02:27.153 CXX test/cpp_headers/conf.o 00:02:27.153 CXX test/cpp_headers/blob.o 00:02:27.153 CXX test/cpp_headers/config.o 00:02:27.153 CXX test/cpp_headers/cpuset.o 00:02:27.153 CXX test/cpp_headers/crc16.o 00:02:27.153 CXX test/cpp_headers/crc32.o 00:02:27.153 CXX test/cpp_headers/crc64.o 00:02:27.153 CXX test/cpp_headers/dif.o 00:02:27.153 CXX test/cpp_headers/endian.o 00:02:27.153 CXX test/cpp_headers/dma.o 00:02:27.153 CXX test/cpp_headers/env.o 00:02:27.153 CXX test/cpp_headers/event.o 00:02:27.153 CXX test/cpp_headers/fd_group.o 00:02:27.153 CXX test/cpp_headers/fd.o 00:02:27.153 CXX test/cpp_headers/ftl.o 00:02:27.153 CXX test/cpp_headers/env_dpdk.o 00:02:27.153 CXX test/cpp_headers/file.o 00:02:27.414 CXX test/cpp_headers/gpt_spec.o 00:02:27.414 CXX test/cpp_headers/hexlify.o 00:02:27.414 CXX test/cpp_headers/idxd.o 00:02:27.414 CXX test/cpp_headers/init.o 00:02:27.414 CXX test/cpp_headers/idxd_spec.o 00:02:27.414 CXX test/cpp_headers/ioat.o 00:02:27.414 CXX test/cpp_headers/histogram_data.o 00:02:27.414 CXX 
test/cpp_headers/iscsi_spec.o 00:02:27.414 CXX test/cpp_headers/jsonrpc.o 00:02:27.414 CXX test/cpp_headers/ioat_spec.o 00:02:27.414 CXX test/cpp_headers/keyring.o 00:02:27.414 CXX test/cpp_headers/keyring_module.o 00:02:27.414 CXX test/cpp_headers/json.o 00:02:27.414 CXX test/cpp_headers/lvol.o 00:02:27.414 CXX test/cpp_headers/log.o 00:02:27.414 CXX test/cpp_headers/nbd.o 00:02:27.414 CXX test/cpp_headers/likely.o 00:02:27.414 CXX test/cpp_headers/notify.o 00:02:27.414 CXX test/cpp_headers/nvme_intel.o 00:02:27.414 CXX test/cpp_headers/memory.o 00:02:27.414 CXX test/cpp_headers/mmio.o 00:02:27.414 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:27.414 CXX test/cpp_headers/nvme.o 00:02:27.414 CXX test/cpp_headers/nvme_ocssd.o 00:02:27.414 CC test/env/vtophys/vtophys.o 00:02:27.414 CC test/env/memory/memory_ut.o 00:02:27.414 CXX test/cpp_headers/nvme_spec.o 00:02:27.414 CXX test/cpp_headers/nvme_zns.o 00:02:27.414 CC test/app/stub/stub.o 00:02:27.414 CXX test/cpp_headers/nvmf_cmd.o 00:02:27.414 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:27.414 CC test/app/jsoncat/jsoncat.o 00:02:27.414 CXX test/cpp_headers/nvmf.o 00:02:27.414 CXX test/cpp_headers/nvmf_spec.o 00:02:27.414 CXX test/cpp_headers/nvmf_transport.o 00:02:27.414 CXX test/cpp_headers/opal.o 00:02:27.414 CXX test/cpp_headers/pci_ids.o 00:02:27.414 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:27.414 CXX test/cpp_headers/opal_spec.o 00:02:27.414 CXX test/cpp_headers/queue.o 00:02:27.414 CC examples/util/zipf/zipf.o 00:02:27.414 CXX test/cpp_headers/rpc.o 00:02:27.414 CXX test/cpp_headers/reduce.o 00:02:27.414 CXX test/cpp_headers/pipe.o 00:02:27.414 CC examples/ioat/perf/perf.o 00:02:27.414 CXX test/cpp_headers/scheduler.o 00:02:27.414 LINK rpc_client_test 00:02:27.414 CXX test/cpp_headers/scsi.o 00:02:27.414 CXX test/cpp_headers/sock.o 00:02:27.414 CXX test/cpp_headers/scsi_spec.o 00:02:27.414 CXX test/cpp_headers/stdinc.o 00:02:27.414 CXX test/cpp_headers/string.o 00:02:27.414 CXX test/cpp_headers/thread.o 00:02:27.414 CXX test/cpp_headers/trace.o 00:02:27.414 CXX test/cpp_headers/tree.o 00:02:27.414 CXX test/cpp_headers/ublk.o 00:02:27.414 CXX test/cpp_headers/trace_parser.o 00:02:27.414 CC examples/ioat/verify/verify.o 00:02:27.414 CC test/app/histogram_perf/histogram_perf.o 00:02:27.414 CXX test/cpp_headers/util.o 00:02:27.414 CXX test/cpp_headers/uuid.o 00:02:27.414 CXX test/cpp_headers/version.o 00:02:27.414 CXX test/cpp_headers/vfio_user_spec.o 00:02:27.414 CC test/env/pci/pci_ut.o 00:02:27.414 CXX test/cpp_headers/vhost.o 00:02:27.414 CXX test/cpp_headers/vmd.o 00:02:27.414 CXX test/cpp_headers/vfio_user_pci.o 00:02:27.414 CXX test/cpp_headers/xor.o 00:02:27.414 CXX test/cpp_headers/zipf.o 00:02:27.414 CC app/fio/nvme/fio_plugin.o 00:02:27.414 CC test/thread/poller_perf/poller_perf.o 00:02:27.414 LINK spdk_lspci 00:02:27.414 LINK spdk_nvme_discover 00:02:27.414 CC app/fio/bdev/fio_plugin.o 00:02:27.414 CC test/dma/test_dma/test_dma.o 00:02:27.414 CC test/app/bdev_svc/bdev_svc.o 00:02:27.675 LINK nvmf_tgt 00:02:27.675 LINK interrupt_tgt 00:02:27.675 LINK spdk_trace_record 00:02:27.675 LINK iscsi_tgt 00:02:27.675 LINK spdk_tgt 00:02:27.675 CC test/env/mem_callbacks/mem_callbacks.o 00:02:27.675 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:27.675 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:27.675 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:27.675 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:27.934 LINK env_dpdk_post_init 00:02:27.934 LINK stub 00:02:27.934 LINK jsoncat 00:02:27.934 LINK spdk_dd 00:02:27.934 
LINK bdev_svc 00:02:27.934 LINK vtophys 00:02:27.934 LINK poller_perf 00:02:28.195 LINK zipf 00:02:28.195 LINK ioat_perf 00:02:28.195 LINK histogram_perf 00:02:28.195 LINK verify 00:02:28.195 LINK spdk_trace 00:02:28.455 LINK vhost_fuzz 00:02:28.455 LINK nvme_fuzz 00:02:28.455 LINK test_dma 00:02:28.455 LINK spdk_top 00:02:28.455 LINK pci_ut 00:02:28.455 LINK spdk_nvme 00:02:28.455 LINK spdk_bdev 00:02:28.455 LINK spdk_nvme_identify 00:02:28.455 CC test/event/event_perf/event_perf.o 00:02:28.455 CC test/event/reactor_perf/reactor_perf.o 00:02:28.455 LINK spdk_nvme_perf 00:02:28.455 CC test/event/reactor/reactor.o 00:02:28.455 CC examples/vmd/lsvmd/lsvmd.o 00:02:28.455 LINK mem_callbacks 00:02:28.455 CC examples/vmd/led/led.o 00:02:28.455 CC examples/idxd/perf/perf.o 00:02:28.455 CC examples/sock/hello_world/hello_sock.o 00:02:28.455 CC test/event/app_repeat/app_repeat.o 00:02:28.715 CC examples/thread/thread/thread_ex.o 00:02:28.715 CC test/event/scheduler/scheduler.o 00:02:28.715 CC app/vhost/vhost.o 00:02:28.715 LINK lsvmd 00:02:28.716 LINK reactor 00:02:28.716 LINK reactor_perf 00:02:28.716 LINK event_perf 00:02:28.716 LINK led 00:02:28.716 LINK app_repeat 00:02:28.716 LINK hello_sock 00:02:28.976 LINK thread 00:02:28.976 LINK idxd_perf 00:02:28.976 LINK vhost 00:02:28.976 LINK scheduler 00:02:28.976 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:28.976 CC test/nvme/cuse/cuse.o 00:02:28.976 CC test/nvme/err_injection/err_injection.o 00:02:28.976 CC test/nvme/simple_copy/simple_copy.o 00:02:28.976 CC test/nvme/sgl/sgl.o 00:02:28.976 CC test/nvme/overhead/overhead.o 00:02:28.976 CC test/blobfs/mkfs/mkfs.o 00:02:28.976 CC test/nvme/connect_stress/connect_stress.o 00:02:28.976 CC test/nvme/compliance/nvme_compliance.o 00:02:28.976 CC test/nvme/reserve/reserve.o 00:02:28.976 CC test/nvme/aer/aer.o 00:02:28.976 CC test/nvme/boot_partition/boot_partition.o 00:02:28.976 CC test/nvme/startup/startup.o 00:02:28.976 CC test/nvme/fused_ordering/fused_ordering.o 00:02:28.976 CC test/nvme/reset/reset.o 00:02:28.976 CC test/nvme/e2edp/nvme_dp.o 00:02:28.976 CC test/nvme/fdp/fdp.o 00:02:28.976 CC test/accel/dif/dif.o 00:02:28.976 LINK memory_ut 00:02:28.976 CC test/lvol/esnap/esnap.o 00:02:29.237 LINK err_injection 00:02:29.237 LINK boot_partition 00:02:29.237 LINK connect_stress 00:02:29.237 LINK doorbell_aers 00:02:29.237 LINK mkfs 00:02:29.237 LINK reserve 00:02:29.237 LINK startup 00:02:29.237 LINK fused_ordering 00:02:29.237 LINK simple_copy 00:02:29.237 LINK reset 00:02:29.237 LINK overhead 00:02:29.237 LINK sgl 00:02:29.237 LINK aer 00:02:29.237 LINK nvme_dp 00:02:29.237 CC examples/nvme/hello_world/hello_world.o 00:02:29.237 LINK nvme_compliance 00:02:29.237 LINK fdp 00:02:29.237 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:29.237 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:29.237 CC examples/nvme/reconnect/reconnect.o 00:02:29.237 CC examples/nvme/arbitration/arbitration.o 00:02:29.237 CC examples/nvme/hotplug/hotplug.o 00:02:29.237 LINK iscsi_fuzz 00:02:29.237 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:29.237 CC examples/nvme/abort/abort.o 00:02:29.237 LINK dif 00:02:29.498 CC examples/accel/perf/accel_perf.o 00:02:29.498 CC examples/blob/hello_world/hello_blob.o 00:02:29.498 CC examples/blob/cli/blobcli.o 00:02:29.498 LINK cmb_copy 00:02:29.498 LINK pmr_persistence 00:02:29.498 LINK hello_world 00:02:29.498 LINK hotplug 00:02:29.498 LINK reconnect 00:02:29.498 LINK arbitration 00:02:29.498 LINK abort 00:02:29.760 LINK hello_blob 00:02:29.760 LINK nvme_manage 00:02:29.760 
LINK accel_perf 00:02:29.760 LINK blobcli 00:02:30.021 CC test/bdev/bdevio/bdevio.o 00:02:30.021 LINK cuse 00:02:30.282 LINK bdevio 00:02:30.282 CC examples/bdev/hello_world/hello_bdev.o 00:02:30.282 CC examples/bdev/bdevperf/bdevperf.o 00:02:30.543 LINK hello_bdev 00:02:31.116 LINK bdevperf 00:02:31.690 CC examples/nvmf/nvmf/nvmf.o 00:02:31.952 LINK nvmf 00:02:33.339 LINK esnap 00:02:33.600 00:02:33.600 real 0m51.484s 00:02:33.600 user 6m34.182s 00:02:33.600 sys 4m35.979s 00:02:33.600 13:32:59 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:33.600 13:32:59 make -- common/autotest_common.sh@10 -- $ set +x 00:02:33.600 ************************************ 00:02:33.600 END TEST make 00:02:33.600 ************************************ 00:02:33.600 13:32:59 -- common/autotest_common.sh@1142 -- $ return 0 00:02:33.600 13:32:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:33.600 13:32:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:33.600 13:32:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:33.600 13:32:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.600 13:32:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:33.600 13:32:59 -- pm/common@44 -- $ pid=746750 00:02:33.600 13:32:59 -- pm/common@50 -- $ kill -TERM 746750 00:02:33.600 13:32:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.600 13:32:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:33.600 13:32:59 -- pm/common@44 -- $ pid=746751 00:02:33.600 13:32:59 -- pm/common@50 -- $ kill -TERM 746751 00:02:33.600 13:32:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.600 13:32:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:33.600 13:32:59 -- pm/common@44 -- $ pid=746753 00:02:33.600 13:32:59 -- pm/common@50 -- $ kill -TERM 746753 00:02:33.600 13:32:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.600 13:32:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:33.600 13:32:59 -- pm/common@44 -- $ pid=746771 00:02:33.600 13:32:59 -- pm/common@50 -- $ sudo -E kill -TERM 746771 00:02:33.600 13:33:00 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:33.600 13:33:00 -- nvmf/common.sh@7 -- # uname -s 00:02:33.600 13:33:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:33.600 13:33:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:33.600 13:33:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:33.600 13:33:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:33.600 13:33:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:33.600 13:33:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:33.600 13:33:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:33.600 13:33:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:33.600 13:33:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:33.600 13:33:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:33.600 13:33:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:33.600 13:33:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:33.600 13:33:00 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:33.600 13:33:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:33.600 13:33:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:33.600 13:33:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:33.600 13:33:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:33.600 13:33:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:33.601 13:33:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:33.601 13:33:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:33.601 13:33:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.601 13:33:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.601 13:33:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.601 13:33:00 -- paths/export.sh@5 -- # export PATH 00:02:33.601 13:33:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:33.601 13:33:00 -- nvmf/common.sh@47 -- # : 0 00:02:33.601 13:33:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:33.601 13:33:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:33.601 13:33:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:33.601 13:33:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:33.601 13:33:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:33.601 13:33:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:33.601 13:33:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:33.601 13:33:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:33.601 13:33:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:33.601 13:33:00 -- spdk/autotest.sh@32 -- # uname -s 00:02:33.601 13:33:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:33.601 13:33:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:33.601 13:33:00 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:33.863 13:33:00 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:33.863 13:33:00 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:33.863 13:33:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:33.863 13:33:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:33.863 13:33:00 -- spdk/autotest.sh@46 -- # 
udevadm=/usr/sbin/udevadm 00:02:33.863 13:33:00 -- spdk/autotest.sh@48 -- # udevadm_pid=809879 00:02:33.863 13:33:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:33.863 13:33:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:33.863 13:33:00 -- pm/common@17 -- # local monitor 00:02:33.863 13:33:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.863 13:33:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.863 13:33:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.863 13:33:00 -- pm/common@21 -- # date +%s 00:02:33.863 13:33:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:33.863 13:33:00 -- pm/common@25 -- # sleep 1 00:02:33.863 13:33:00 -- pm/common@21 -- # date +%s 00:02:33.863 13:33:00 -- pm/common@21 -- # date +%s 00:02:33.863 13:33:00 -- pm/common@21 -- # date +%s 00:02:33.863 13:33:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721043180 00:02:33.863 13:33:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721043180 00:02:33.863 13:33:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721043180 00:02:33.863 13:33:00 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721043180 00:02:33.863 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721043180_collect-vmstat.pm.log 00:02:33.863 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721043180_collect-cpu-load.pm.log 00:02:33.863 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721043180_collect-cpu-temp.pm.log 00:02:33.863 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721043180_collect-bmc-pm.bmc.pm.log 00:02:34.836 13:33:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:34.836 13:33:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:34.836 13:33:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:34.836 13:33:01 -- common/autotest_common.sh@10 -- # set +x 00:02:34.836 13:33:01 -- spdk/autotest.sh@59 -- # create_test_list 00:02:34.836 13:33:01 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:34.836 13:33:01 -- common/autotest_common.sh@10 -- # set +x 00:02:34.836 13:33:01 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:34.836 13:33:01 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.836 13:33:01 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.836 13:33:01 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:34.836 13:33:01 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.836 13:33:01 -- 
spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:34.836 13:33:01 -- common/autotest_common.sh@1455 -- # uname 00:02:34.836 13:33:01 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:34.836 13:33:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:34.836 13:33:01 -- common/autotest_common.sh@1475 -- # uname 00:02:34.836 13:33:01 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:34.836 13:33:01 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:34.836 13:33:01 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:34.836 13:33:01 -- spdk/autotest.sh@72 -- # hash lcov 00:02:34.836 13:33:01 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:34.836 13:33:01 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:34.836 --rc lcov_branch_coverage=1 00:02:34.836 --rc lcov_function_coverage=1 00:02:34.836 --rc genhtml_branch_coverage=1 00:02:34.836 --rc genhtml_function_coverage=1 00:02:34.836 --rc genhtml_legend=1 00:02:34.836 --rc geninfo_all_blocks=1 00:02:34.836 ' 00:02:34.836 13:33:01 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:34.836 --rc lcov_branch_coverage=1 00:02:34.836 --rc lcov_function_coverage=1 00:02:34.836 --rc genhtml_branch_coverage=1 00:02:34.836 --rc genhtml_function_coverage=1 00:02:34.836 --rc genhtml_legend=1 00:02:34.836 --rc geninfo_all_blocks=1 00:02:34.836 ' 00:02:34.836 13:33:01 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:34.836 --rc lcov_branch_coverage=1 00:02:34.836 --rc lcov_function_coverage=1 00:02:34.836 --rc genhtml_branch_coverage=1 00:02:34.836 --rc genhtml_function_coverage=1 00:02:34.836 --rc genhtml_legend=1 00:02:34.836 --rc geninfo_all_blocks=1 00:02:34.836 --no-external' 00:02:34.836 13:33:01 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:34.836 --rc lcov_branch_coverage=1 00:02:34.836 --rc lcov_function_coverage=1 00:02:34.836 --rc genhtml_branch_coverage=1 00:02:34.836 --rc genhtml_function_coverage=1 00:02:34.836 --rc genhtml_legend=1 00:02:34.836 --rc geninfo_all_blocks=1 00:02:34.836 --no-external' 00:02:34.836 13:33:01 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:34.836 lcov: LCOV version 1.14 00:02:34.836 13:33:01 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:49.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:49.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:01.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:01.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:01.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:01.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:01.982 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:01.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:01.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:01.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:01.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:01.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:01.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:01.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:01.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:01.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:01.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:01.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:01.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:01.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:01.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:01.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:01.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:01.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:01.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:01.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:01.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:01.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:01.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:01.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:01.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:01.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:01.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:01.983 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:01.983 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:01.983 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:01.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:01.984 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:03.897 13:33:30 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:03.897 13:33:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:03.897 13:33:30 -- common/autotest_common.sh@10 -- # set +x 00:03:03.897 13:33:30 -- spdk/autotest.sh@91 -- # rm -f 00:03:03.897 13:33:30 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.216 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:07.216 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:07.217 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:07.217 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:07.217 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:07.477 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:07.477 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:07.477 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:07.477 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:07.477 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:07.477 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:07.477 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:07.477 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:07.477 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:07.477 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:07.477 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:07.738 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:08.000 13:33:34 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:08.000 13:33:34 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:08.000 13:33:34 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:08.000 13:33:34 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:08.000 13:33:34 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:08.000 13:33:34 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:08.000 13:33:34 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:08.000 13:33:34 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.000 13:33:34 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:08.000 13:33:34 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:08.000 13:33:34 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:08.000 13:33:34 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:08.000 13:33:34 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:08.000 13:33:34 -- 
scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:08.000 13:33:34 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:08.000 No valid GPT data, bailing 00:03:08.000 13:33:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:08.000 13:33:34 -- scripts/common.sh@391 -- # pt= 00:03:08.000 13:33:34 -- scripts/common.sh@392 -- # return 1 00:03:08.000 13:33:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:08.000 1+0 records in 00:03:08.000 1+0 records out 00:03:08.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00178098 s, 589 MB/s 00:03:08.000 13:33:34 -- spdk/autotest.sh@118 -- # sync 00:03:08.000 13:33:34 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:08.000 13:33:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:08.000 13:33:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:16.140 13:33:42 -- spdk/autotest.sh@124 -- # uname -s 00:03:16.140 13:33:42 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:16.140 13:33:42 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:16.140 13:33:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:16.140 13:33:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.140 13:33:42 -- common/autotest_common.sh@10 -- # set +x 00:03:16.140 ************************************ 00:03:16.140 START TEST setup.sh 00:03:16.140 ************************************ 00:03:16.140 13:33:42 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:16.140 * Looking for test storage... 00:03:16.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:16.140 13:33:42 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:16.140 13:33:42 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:16.140 13:33:42 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:16.140 13:33:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:16.140 13:33:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.140 13:33:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:16.140 ************************************ 00:03:16.140 START TEST acl 00:03:16.140 ************************************ 00:03:16.140 13:33:42 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:16.140 * Looking for test storage... 
00:03:16.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:16.140 13:33:42 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:16.140 13:33:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:16.140 13:33:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:16.140 13:33:42 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:16.140 13:33:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:16.140 13:33:42 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:16.140 13:33:42 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:16.140 13:33:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:16.140 13:33:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:16.140 13:33:42 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:16.140 13:33:42 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:16.140 13:33:42 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:16.140 13:33:42 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:16.140 13:33:42 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:16.140 13:33:42 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.140 13:33:42 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:19.510 13:33:45 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:19.510 13:33:45 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:19.510 13:33:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.510 13:33:45 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:19.510 13:33:45 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.510 13:33:45 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:22.806 Hugepages 00:03:22.806 node hugesize free / total 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.806 00:03:22.806 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.806 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.807 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:23.067 13:33:49 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:23.067 13:33:49 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:23.067 13:33:49 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.067 13:33:49 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:23.067 ************************************ 00:03:23.067 START TEST denied 00:03:23.067 ************************************ 00:03:23.067 13:33:49 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:23.067 13:33:49 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:23.067 13:33:49 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:23.067 13:33:49 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:23.067 13:33:49 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.067 13:33:49 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:27.271 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:27.271 13:33:53 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:27.271 13:33:53 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:27.271 13:33:53 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:27.272 13:33:53 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:27.272 13:33:53 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:27.272 13:33:53 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:27.272 13:33:53 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:27.272 13:33:53 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:27.272 13:33:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.272 13:33:53 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.477 00:03:31.477 real 0m8.182s 00:03:31.477 user 0m2.603s 00:03:31.477 sys 0m4.817s 00:03:31.477 13:33:57 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.477 13:33:57 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:31.477 ************************************ 00:03:31.477 END TEST denied 00:03:31.477 ************************************ 00:03:31.477 13:33:57 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:31.477 13:33:57 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:31.477 13:33:57 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.477 13:33:57 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.477 13:33:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:31.477 ************************************ 00:03:31.477 START TEST allowed 00:03:31.477 ************************************ 00:03:31.477 13:33:57 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:31.477 13:33:57 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:31.477 13:33:57 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:31.477 13:33:57 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:31.477 13:33:57 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.477 13:33:57 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:36.763 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:36.763 13:34:02 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:36.763 13:34:02 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:36.763 13:34:02 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:36.763 13:34:02 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.763 13:34:02 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.965 00:03:40.965 real 0m8.998s 00:03:40.965 user 0m2.529s 00:03:40.965 sys 0m4.675s 00:03:40.965 13:34:06 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.965 13:34:06 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:40.965 ************************************ 00:03:40.965 END TEST allowed 00:03:40.965 ************************************ 00:03:40.965 13:34:06 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:40.965 00:03:40.965 real 0m24.476s 00:03:40.965 user 0m7.751s 00:03:40.965 sys 0m14.277s 00:03:40.965 13:34:06 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.965 13:34:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:40.965 ************************************ 00:03:40.965 END TEST acl 00:03:40.965 ************************************ 00:03:40.965 13:34:06 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:40.965 13:34:06 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:40.965 13:34:06 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.965 13:34:06 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.965 13:34:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:40.965 ************************************ 00:03:40.965 START TEST hugepages 00:03:40.965 ************************************ 00:03:40.965 13:34:06 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:40.965 * Looking for test storage... 00:03:40.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:40.965 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:40.965 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:40.965 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 102778692 kB' 'MemAvailable: 106262920 kB' 'Buffers: 2704 kB' 'Cached: 14564524 kB' 'SwapCached: 0 kB' 'Active: 11606516 kB' 'Inactive: 3523448 kB' 'Active(anon): 11132332 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566072 kB' 'Mapped: 206884 kB' 'Shmem: 10569596 kB' 'KReclaimable: 527540 kB' 'Slab: 1392520 kB' 'SReclaimable: 527540 kB' 'SUnreclaim: 864980 kB' 'KernelStack: 27328 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460892 kB' 'Committed_AS: 12713564 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235460 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.966 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.967 13:34:06 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
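At this point get_meminfo has reported a default hugepage size of 2048 kB, and hugepages.sh has recorded the two knobs it manages (the per-size pool /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages and the global /proc/sys/vm/nr_hugepages) while clearing any HUGE_EVEN_ALLOC/HUGEMEM/HUGENODE/NRHUGE overrides. The entries that follow (get_nodes, clear_hp, then get_test_nr_hugepages 2097152 0 inside default_setup) zero the existing per-node pools and size the test pool as size / default_hugepages = 2097152 / 2048 = 1024 pages on node 0. A condensed sketch of that sequence, assuming the standard kernel sysfs layout and root privileges; it only approximates what the traced helpers do and is not the SPDK script itself:

  # Detect the default hugepage size (2048 kB in this run).
  hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)

  # Size the test pool: 2097152 kB (2 GiB) worth of default-sized pages.
  size_kb=2097152
  nr_hugepages=$(( size_kb / hugepagesize_kb ))          # 2097152 / 2048 = 1024

  # clear_hp equivalent: reset every pre-existing per-node pool to zero.
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"
      done
  done

  # default_setup then asks for the whole pool on node 0; in the log this is
  # applied by scripts/setup.sh, the direct write below is shown only for illustration.
  echo "$nr_hugepages" > /sys/devices/system/node/node0/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages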
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:40.967 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:40.967 13:34:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:40.967 13:34:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:40.967 13:34:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:40.967 13:34:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:40.967 13:34:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:40.967 13:34:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:40.967 13:34:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:40.967 13:34:07 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:40.968 13:34:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:40.968 13:34:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:40.968 13:34:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:40.968 ************************************
00:03:40.968 START TEST default_setup
00:03:40.968 ************************************
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:40.968 13:34:07 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:44.312 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:44.312 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:44.312 0000:80:01.4 (8086 0b00): ioatdma ->
vfio-pci 00:03:44.312 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:44.312 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:44.312 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:44.312 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:44.312 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:44.312 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:44.312 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:44.312 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:44.312 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:44.312 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:44.312 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:44.312 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:44.312 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:44.312 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.580 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104940352 kB' 'MemAvailable: 108424556 kB' 'Buffers: 2704 kB' 'Cached: 14564644 kB' 'SwapCached: 0 kB' 'Active: 11624388 kB' 'Inactive: 3523448 kB' 'Active(anon): 11150204 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583488 kB' 'Mapped: 207184 kB' 'Shmem: 10569716 kB' 'KReclaimable: 527516 kB' 'Slab: 1390872 kB' 'SReclaimable: 527516 
kB' 'SUnreclaim: 863356 kB' 'KernelStack: 27360 kB' 'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12731404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.581 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104940484 kB' 'MemAvailable: 108424656 kB' 'Buffers: 2704 kB' 'Cached: 14564648 kB' 'SwapCached: 0 kB' 'Active: 11623596 kB' 'Inactive: 3523448 kB' 'Active(anon): 11149412 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583140 kB' 'Mapped: 207076 kB' 'Shmem: 10569720 kB' 'KReclaimable: 527484 kB' 'Slab: 1390856 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 863372 kB' 'KernelStack: 27328 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12731424 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.582 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.583 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- 
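The xtrace above is setup/common.sh's get_meminfo helper walking every /proc/meminfo field until it reaches the one requested (HugePages_Surp first, now HugePages_Rsvd) and echoing that field's value before returning. A minimal sketch of the same idea, written as a hypothetical standalone helper rather than the literal setup/common.sh code:

    get_meminfo_sketch() {
        # $1 = meminfo field name (e.g. HugePages_Rsvd), $2 = optional NUMA node number.
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node meminfo lines carry a "Node <n> " prefix; strip it, then print
        # the numeric value of the first matching "Key: value [kB]" line.
        sed -E 's/^Node [0-9]+ //' "$mem_f" |
            awk -v key="$get" -F': +' '$1 == key { print $2 + 0; exit }'
    }

Used as, for example, resv=$(get_meminfo_sketch HugePages_Rsvd) for the system-wide counter, or get_meminfo_sketch HugePages_Surp 0 for node 0 only; the value comes back in kB (or in pages for the HugePages_* counters), which is what the trace echoes just before each "return 0".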
setup/common.sh@28 -- # mapfile -t mem 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104939400 kB' 'MemAvailable: 108423572 kB' 'Buffers: 2704 kB' 'Cached: 14564664 kB' 'SwapCached: 0 kB' 'Active: 11624340 kB' 'Inactive: 3523448 kB' 'Active(anon): 11150156 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583848 kB' 'Mapped: 207076 kB' 'Shmem: 10569736 kB' 'KReclaimable: 527484 kB' 'Slab: 1390836 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 863352 kB' 'KernelStack: 27328 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12732448 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235300 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 
13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.584 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.585 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.586 nr_hugepages=1024 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.586 resv_hugepages=0 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.586 surplus_hugepages=0 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.586 anon_hugepages=0 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 
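At this point the test has collected surp=0 and resv=0, echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and the (( 1024 == nr_hugepages + surp + resv )) checks then confirm that the kernel-reported HugePages_Total is consistent with the pool this run configured plus any surplus and reserved pages. A self-contained sketch of that arithmetic (assuming a plain /proc/meminfo read in place of the script's get_meminfo, and the 1024-page pool used in this run):

    nr_hugepages=1024                      # pool size configured for this test run
    meminfo() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0; exit }' /proc/meminfo; }
    surp=$(meminfo HugePages_Surp)         # pages allocated beyond the persistent pool
    resv=$(meminfo HugePages_Rsvd)         # pages reserved for mappings but not yet faulted in
    total=$(meminfo HugePages_Total)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
    else
        echo "hugepage accounting mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    fi

In this log both the surplus and reserved counters are 0, so HugePages_Total: 1024 satisfies the check and the trace moves on to the per-node step.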
13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104939724 kB' 'MemAvailable: 108423896 kB' 'Buffers: 2704 kB' 'Cached: 14564688 kB' 'SwapCached: 0 kB' 'Active: 11623796 kB' 'Inactive: 3523448 kB' 'Active(anon): 11149612 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583244 kB' 'Mapped: 207076 kB' 'Shmem: 10569760 kB' 'KReclaimable: 527484 kB' 'Slab: 1390836 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 863352 kB' 'KernelStack: 27248 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12734080 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235380 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 
13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.586 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.587 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52506576 kB' 'MemUsed: 13152432 kB' 'SwapCached: 0 kB' 'Active: 4926876 kB' 'Inactive: 3299996 kB' 'Active(anon): 4774328 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7950552 kB' 'Mapped: 93160 kB' 'AnonPages: 279524 kB' 'Shmem: 4498008 kB' 'KernelStack: 14936 kB' 'PageTables: 4884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394860 kB' 'Slab: 907000 kB' 
'SReclaimable: 394860 kB' 'SUnreclaim: 512140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.588 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
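Once the scan below reaches HugePages_Surp, get_meminfo echoes 0, default_setup confirms node0=1024 expecting 1024, and the next test, per_node_1G_alloc, sizes its request: the size=1048576 kB (1 GiB) argument together with the 2048 kB Hugepagesize reported in the meminfo dumps is consistent with the nr_hugepages=512 the trace records, assigned to each of nodes 0 and 1 via NRHUGE=512 and HUGENODE=0,1. A sketch of that arithmetic, with illustrative variable names standing in for the hugepages.sh internals:

    # Illustrative arithmetic only; names other than NRHUGE/HUGENODE are stand-ins.
    size_kb=1048576                          # 1 GiB requested for each listed node
    hugepage_kb=2048                         # Hugepagesize from the meminfo dumps
    per_node=$(( size_kb / hugepage_kb ))    # 512 pages per node
    nodes=(0 1)
    total=0
    for n in "${nodes[@]}"; do
        total=$(( total + per_node ))        # node0=512, node1=512
    done
    echo "NRHUGE=$per_node HUGENODE=$(IFS=,; echo "${nodes[*]}") expecting total $total"

The trace that follows then runs scripts/setup.sh with those settings and verify_nr_hugepages reads back the combined HugePages_Total of 1024 visible in the meminfo snapshots further down.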
00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.589 node0=1024 expecting 1024 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.589 00:03:44.589 real 0m4.014s 00:03:44.589 user 0m1.574s 00:03:44.589 sys 0m2.467s 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.589 13:34:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:44.589 ************************************ 00:03:44.589 END TEST default_setup 00:03:44.589 ************************************ 00:03:44.589 13:34:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:44.589 13:34:11 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:44.589 13:34:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.589 13:34:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.589 13:34:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.851 ************************************ 00:03:44.851 START TEST per_node_1G_alloc 00:03:44.851 ************************************ 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.851 13:34:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:48.154 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:48.154 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:48.154 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:48.154 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:48.154 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:48.154 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:48.154 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:48.154 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:48.154 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:48.154 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:48.154 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:48.154 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:48.154 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:48.154 
0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:48.154 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:48.154 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:48.154 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:48.421 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:48.421 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:48.421 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:48.421 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.421 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.421 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:48.421 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:48.421 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:48.421 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.421 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.421 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.421 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104959576 kB' 'MemAvailable: 108443748 kB' 'Buffers: 2704 kB' 'Cached: 14564816 kB' 'SwapCached: 0 kB' 'Active: 11622248 kB' 'Inactive: 3523448 kB' 'Active(anon): 11148064 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581412 kB' 'Mapped: 206160 kB' 'Shmem: 10569888 kB' 'KReclaimable: 527484 kB' 'Slab: 1391144 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 863660 kB' 'KernelStack: 27488 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12723220 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.422 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104957992 kB' 'MemAvailable: 108442164 kB' 'Buffers: 2704 kB' 'Cached: 14564820 kB' 'SwapCached: 0 kB' 'Active: 11622456 kB' 'Inactive: 3523448 kB' 'Active(anon): 11148272 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581680 kB' 'Mapped: 206112 kB' 'Shmem: 10569892 kB' 'KReclaimable: 527484 kB' 'Slab: 1391140 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 863656 kB' 'KernelStack: 27568 kB' 'PageTables: 9228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12721632 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235620 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.423 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 
13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.424 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 13:34:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104955224 kB' 'MemAvailable: 108439396 kB' 'Buffers: 2704 kB' 'Cached: 14564836 kB' 'SwapCached: 0 kB' 'Active: 11622776 kB' 'Inactive: 3523448 kB' 'Active(anon): 11148592 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581468 kB' 'Mapped: 206112 kB' 'Shmem: 10569908 kB' 'KReclaimable: 527484 kB' 'Slab: 1391140 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 863656 kB' 'KernelStack: 27536 kB' 'PageTables: 8928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12723264 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.425 
13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.425 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ... ]] (scan of the remaining /proc/meminfo keys, MemAvailable through HugePages_Free, each compared against HugePages_Rsvd and skipped with continue) 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.427 13:34:14 
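The lookups traced above all follow the same pattern: setup/common.sh dumps the chosen meminfo source, then walks it key by key with IFS=': ' until the requested field matches (HugePages_Surp and HugePages_Rsvd here, both 0) and echoes its value back to setup/hugepages.sh. A minimal stand-alone sketch of that pattern, with a made-up helper name and a direct file read instead of the script's mapfile step, looks like this:

    get_meminfo_sketch() {
        # Sketch only: return the value of one field from /proc/meminfo.
        # The real setup/common.sh mapfiles the content first; reading the
        # file directly keeps this example short.
        local get="$1" var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"        # e.g. 0 for HugePages_Rsvd on this run
                return 0
            fi
        done < /proc/meminfo
        return 1                   # field not present
    }
    # usage: surp=$(get_meminfo_sketch HugePages_Surp)
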
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.427 nr_hugepages=1024 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.427 resv_hugepages=0 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.427 surplus_hugepages=0 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.427 anon_hugepages=0 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104955808 kB' 'MemAvailable: 108439980 kB' 'Buffers: 2704 kB' 'Cached: 14564836 kB' 'SwapCached: 0 kB' 'Active: 11622264 kB' 'Inactive: 3523448 kB' 'Active(anon): 11148080 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581456 kB' 'Mapped: 206112 kB' 'Shmem: 10569908 kB' 'KReclaimable: 527484 kB' 'Slab: 1391140 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 863656 kB' 'KernelStack: 27456 kB' 'PageTables: 8860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12723284 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235620 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 
102760448 kB' 00:03:48.427 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ... ]] (scan of every /proc/meminfo key, MemTotal through Unaccepted, each compared against HugePages_Total and skipped with continue) 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.429 13:34:14 
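With those values in hand, the guard at setup/hugepages.sh@110 is plain arithmetic: HugePages_Total (1024) has to equal the requested nr_hugepages (1024) plus surplus (0) plus reserved (0), which it does, and it also matches the Hugetlb figure in the dump (1024 pages of 2048 kB is 2097152 kB). A small sketch of that check, with the values taken from this run and illustrative variable names:

    nr_hugepages=1024     # pages requested by the test (2048 kB each)
    surp=0                # HugePages_Surp
    resv=0                # HugePages_Rsvd
    total=1024            # HugePages_Total
    if (( total == nr_hugepages + surp + resv )); then
        echo "pool consistent: $(( total * 2048 )) kB of hugetlb memory"   # prints 2097152 kB
    fi
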
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53554324 kB' 'MemUsed: 12104684 kB' 'SwapCached: 0 kB' 'Active: 4926816 kB' 'Inactive: 3299996 kB' 'Active(anon): 4774268 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7950616 kB' 'Mapped: 92684 kB' 'AnonPages: 279360 kB' 'Shmem: 4498072 kB' 'KernelStack: 15224 kB' 'PageTables: 5340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394860 kB' 'Slab: 907392 kB' 'SReclaimable: 394860 kB' 'SUnreclaim: 512532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.429 13:34:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.429 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ... ]] (scan of node0's meminfo keys, MemFree through Unaccepted, each compared against HugePages_Surp and skipped with continue; scan still in progress) 00:03:48.430 13:34:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51401040 kB' 'MemUsed: 9278832 kB' 'SwapCached: 0 kB' 'Active: 6695792 kB' 'Inactive: 223452 kB' 'Active(anon): 6374156 kB' 'Inactive(anon): 0 kB' 'Active(file): 321636 kB' 'Inactive(file): 223452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6616988 kB' 'Mapped: 113428 kB' 'AnonPages: 302364 kB' 'Shmem: 6071900 kB' 
'KernelStack: 12328 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132624 kB' 'Slab: 483748 kB' 'SReclaimable: 132624 kB' 'SUnreclaim: 351124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.430 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.431 13:34:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.431 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.693 13:34:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:48.693 node0=512 expecting 512 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:48.693 node1=512 expecting 512 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:48.693 00:03:48.693 real 0m3.821s 00:03:48.693 user 0m1.539s 00:03:48.693 sys 0m2.335s 00:03:48.693 13:34:14 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.693 13:34:14 
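(Editor's sketch, not part of the log.) The per_node_1G_alloc block above ends by echoing "node0=512 expecting 512" and "node1=512 expecting 512"; each value comes from setup/common.sh's get_meminfo, which strips the "Node <n> " prefix from /sys/devices/system/node/node<n>/meminfo and scans for a single field, exactly as the long runs of IFS=': ' / read / continue in the trace show. A minimal stand-alone sketch of that lookup, assuming the per-node meminfo format visible in the snapshot above (get_node_field is a hypothetical name, not the SPDK helper):

    get_node_field() {   # hypothetical helper: get_node_field <node> <field>
        local node=$1 key=$2 line var val
        while read -r line; do
            line=${line#"Node $node "}              # per-node rows carry a "Node <n> " prefix
            IFS=': ' read -r var val _ <<< "$line"  # same IFS split the traced script uses
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node$node/meminfo"
        return 1
    }
    # The result lines echoed above correspond to checks along these lines:
    for node in 0 1; do
        echo "node$node=$(get_node_field "$node" HugePages_Free) expecting 512"
    done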
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:48.693 ************************************ 00:03:48.693 END TEST per_node_1G_alloc 00:03:48.693 ************************************ 00:03:48.693 13:34:14 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:48.693 13:34:14 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:48.693 13:34:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.693 13:34:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.693 13:34:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.693 ************************************ 00:03:48.693 START TEST even_2G_alloc 00:03:48.693 ************************************ 00:03:48.693 13:34:15 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:48.693 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:48.693 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:48.693 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.693 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.693 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:48.693 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.693 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.693 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.693 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.693 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:48.694 13:34:15 
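(Editor's sketch, not part of the log.) The even_2G_alloc parameter setup traced above reduces to an even split: 2097152 kB of hugepage memory at the default 2048 kB page size gives nr_hugepages=1024, which get_test_nr_hugepages_per_node spreads over the two NUMA nodes as 512 each. A sketch of that arithmetic, assuming those units; variable names mirror the trace, but the real loop in setup/hugepages.sh is structured differently:

    size_kb=2097152                                   # requested hugepage memory (2 GiB, in kB)
    hugepage_kb=2048                                  # default hugepage size reported by the system
    nr_hugepages=$(( size_kb / hugepage_kb ))         # -> 1024, as in the trace
    no_nodes=2
    declare -a nodes_test
    for (( node = 0; node < no_nodes; node++ )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))   # -> 512 per node
    done
    echo "NRHUGE=$nr_hugepages split as ${nodes_test[*]} across $no_nodes nodes"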
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.694 13:34:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:51.998 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:51.998 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:51.998 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:51.998 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:51.998 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:51.998 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:51.998 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:51.998 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:51.998 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:51.998 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:51.998 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:51.998 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:51.999 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:51.999 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:51.999 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:51.999 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:51.999 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:52.265 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 
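(Editor's sketch, not part of the log.) Here verify_nr_hugepages calls get_meminfo AnonHugePages with an empty node argument, so the test for /sys/devices/system/node/node/meminfo fails and mem_f stays /proc/meminfo; the same scan pattern then repeats over the global snapshot printed below. A simplified sketch of that global lookup (get_global_field is a hypothetical name, not the SPDK function):

    get_global_field() {   # hypothetical helper: get_global_field <field>
        local key=$1 var val
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    anon=$(get_global_field AnonHugePages)    # 0 kB in the snapshot that follows
    surp=$(get_global_field HugePages_Surp)   # also 0 in the same /proc/meminfo snapshot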
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104978080 kB' 'MemAvailable: 108462252 kB' 'Buffers: 2704 kB' 'Cached: 14565000 kB' 'SwapCached: 0 kB' 'Active: 11623528 kB' 'Inactive: 3523448 kB' 'Active(anon): 11149344 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581992 kB' 'Mapped: 206204 kB' 'Shmem: 10570072 kB' 'KReclaimable: 527484 kB' 'Slab: 1390708 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 863224 kB' 'KernelStack: 27312 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12721200 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.266 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.267 13:34:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.267 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104978800 kB' 'MemAvailable: 108462972 kB' 'Buffers: 2704 kB' 'Cached: 14565004 kB' 'SwapCached: 0 kB' 'Active: 11622676 kB' 'Inactive: 3523448 kB' 'Active(anon): 11148492 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581672 kB' 'Mapped: 206124 kB' 'Shmem: 10570076 kB' 'KReclaimable: 527484 kB' 'Slab: 1390716 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 863232 kB' 'KernelStack: 27296 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12721216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB'
[xtrace condensed: setup/common.sh@31-32 walk every key of the snapshot above, compare it against HugePages_Surp and skip each non-matching key with 'continue' until the matching key is reached]
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
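The xtrace above is setup/common.sh's get_meminfo pulling single counters (AnonHugePages, HugePages_Surp, next HugePages_Rsvd) out of /proc/meminfo by splitting each line on ': '. As a rough standalone sketch of the same lookup, assuming nothing beyond what the trace shows (the function name get_meminfo_sketch and the usage lines at the end are made up for this example; this is not the exact setup/common.sh code):

#!/usr/bin/env bash
# Standalone sketch of the lookup traced above: load /proc/meminfo, or a
# per-node meminfo file when a node index is given, and print the value
# of one key (e.g. HugePages_Surp), defaulting to 0 when the key is absent.
shopt -s extglob

get_meminfo_sketch() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	local mem line var val _

	# Per-node counters live under sysfs; fall back to the global file.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <n> "; strip it so the
	# key names line up with the plain /proc/meminfo format.
	mem=("${mem[@]#Node +([0-9]) }")

	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		if [[ $var == "$get" ]]; then
			echo "${val:-0}"
			return 0
		fi
	done
	echo 0
}

get_meminfo_sketch HugePages_Surp     # prints 0 in a run like the one above
get_meminfo_sketch HugePages_Free 0   # per-node value when node0 exists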
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.269 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104979136 kB' 'MemAvailable: 108463308 kB' 'Buffers: 2704 kB' 'Cached: 14565020 kB' 'SwapCached: 0 kB' 'Active: 11622704 kB' 'Inactive: 3523448 kB' 'Active(anon): 11148520 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581672 kB' 'Mapped: 206124 kB' 'Shmem: 10570092 kB' 'KReclaimable: 527484 kB' 'Slab: 1390716 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 863232 kB' 'KernelStack: 27296 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12721352 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB'
[xtrace condensed: the same per-key scan over the snapshot repeats, this time against HugePages_Rsvd, skipping every non-matching key with 'continue']
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:52.271 nr_hugepages=1024
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:52.271 resv_hugepages=0
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:52.271 surplus_hugepages=0
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:52.271 anon_hugepages=0
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
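At hugepages.sh@107-110 the trace checks the pool's bookkeeping: the requested nr_hugepages=1024 2 MiB pages (2 GiB in total) plus the surplus and reserved counters just read must equal the kernel's HugePages_Total before the pages are split across nodes. A hedged sketch of that accounting check follows; the meminfo helper and the messages are stand-ins for this example, not the hugepages.sh code itself:

#!/usr/bin/env bash
# Sketch of the consistency check: requested pages + surplus + reserved
# must equal the kernel's current HugePages_Total.
set -e

# Stand-in for get_meminfo: print one value from /proc/meminfo.
meminfo() { awk -v key="$1:" '$1 == key {print $2}' /proc/meminfo; }

nr_hugepages=1024                      # requested 2 MiB pages (2 GiB total)
surp=$(meminfo HugePages_Surp)         # 0 in the trace above
resv=$(meminfo HugePages_Rsvd)         # 0 in the trace above
anon=$(meminfo AnonHugePages)          # 0 kB in the trace above

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# If this does not hold, the even per-node split that follows would be
# based on a stale or partially allocated pool.
if (( $(meminfo HugePages_Total) == nr_hugepages + surp + resv )); then
	echo "hugepage pool consistent: $((nr_hugepages * 2)) MiB backed by 2048 kB pages"
else
	echo "hugepage pool inconsistent" >&2
	exit 1
fi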
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.271 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104979456 kB' 'MemAvailable: 108463628 kB' 'Buffers: 2704 kB' 'Cached: 14565044 kB' 'SwapCached: 0 kB' 'Active: 11622572 kB' 'Inactive: 3523448 kB' 'Active(anon): 11148388 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581588 kB' 'Mapped: 206124 kB' 'Shmem: 10570116 kB' 'KReclaimable: 527484 kB' 'Slab: 1390716 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 863232 kB' 'KernelStack: 27264 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12721260 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB'
[xtrace condensed: the per-key scan over the snapshot repeats a third time, now against HugePages_Total, skipping every non-matching key with 'continue']
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
setup/common.sh@31 -- # read -r var val _ 00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53581688 kB' 'MemUsed: 12077320 kB' 'SwapCached: 0 kB' 'Active: 4924676 kB' 'Inactive: 3299996 kB' 'Active(anon): 4772128 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7950640 kB' 'Mapped: 92684 kB' 'AnonPages: 277188 kB' 'Shmem: 4498096 kB' 'KernelStack: 14888 kB' 'PageTables: 4680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394860 kB' 'Slab: 907008 kB' 'SReclaimable: 394860 kB' 'SUnreclaim: 512148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.536 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.537 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51397544 kB' 'MemUsed: 9282328 kB' 'SwapCached: 0 kB' 'Active: 6697932 kB' 'Inactive: 223452 kB' 'Active(anon): 6376296 kB' 'Inactive(anon): 0 kB' 'Active(file): 321636 kB' 'Inactive(file): 223452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6617152 kB' 'Mapped: 113440 kB' 'AnonPages: 304328 kB' 'Shmem: 6072064 kB' 'KernelStack: 12392 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132624 kB' 'Slab: 483708 kB' 'SReclaimable: 132624 kB' 'SUnreclaim: 351084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 
13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.538 
13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.538 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:52.539 node0=512 expecting 512 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:52.539 node1=512 expecting 512 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:52.539 00:03:52.539 real 0m3.833s 00:03:52.539 user 0m1.509s 00:03:52.539 sys 0m2.389s 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.539 13:34:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:52.539 ************************************ 00:03:52.539 END TEST even_2G_alloc 00:03:52.539 
************************************ 00:03:52.539 13:34:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:52.539 13:34:18 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:52.539 13:34:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.539 13:34:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.539 13:34:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.539 ************************************ 00:03:52.539 START TEST odd_alloc 00:03:52.539 ************************************ 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.539 13:34:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
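The even_2G_alloc pass above and the odd_alloc pass that follows both lean on the same setup/common.sh helper: get_meminfo streams /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node index is given), splits each line on ': ', and echoes the value whose key matches the requested field, which is why the trace walks every MemTotal/MemFree/... key before it reaches HugePages_Total or HugePages_Surp. A minimal stand-alone sketch of that lookup pattern follows; it assumes bash with extglob and the usual procfs/sysfs layout, and meminfo_value is an illustrative name, not the suite's own function.

    shopt -s extglob    # needed for the +([0-9]) prefix strip below

    # Look up one field from /proc/meminfo, or from a node's meminfo file when a
    # node index is supplied (per-node files prefix every line with "Node <n> ").
    meminfo_value() {
        local key=$1 node=${2:-} file=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }          # no-op for plain /proc/meminfo lines
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$key" ]]; then
                echo "$val"                      # number only; the kB unit lands in "_"
                return 0
            fi
        done <"$file"
        return 1
    }

    # Example lookups matching the figures traced above (values differ per host):
    meminfo_value HugePages_Total      # 1024 during even_2G_alloc
    meminfo_value HugePages_Free 0     # 512 on node0

For odd_alloc the suite then requests 2098176 kB of huge pages (HUGEMEM=2049), i.e. 1025 two-megabyte pages, and spreads them 512/513 across the two NUMA nodes before re-reading the same counters; the AnonHugePages lookups further down are gated on transparent huge pages not being set to [never], presumably read from the standard /sys/kernel/mm/transparent_hugepage/enabled knob.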
00:03:55.970 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.970 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.970 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.970 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.970 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.970 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.970 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.970 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.970 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.970 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:55.970 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.970 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.970 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.970 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.970 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.970 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.970 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104983844 kB' 'MemAvailable: 108468016 kB' 'Buffers: 2704 kB' 'Cached: 14565180 kB' 'SwapCached: 0 kB' 'Active: 11625380 kB' 'Inactive: 3523448 kB' 'Active(anon): 11151196 kB' 'Inactive(anon): 0 kB' 
'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583828 kB' 'Mapped: 207128 kB' 'Shmem: 10570252 kB' 'KReclaimable: 527484 kB' 'Slab: 1390284 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 862800 kB' 'KernelStack: 27408 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12756440 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235668 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 
13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.239 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 
13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.240 13:34:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104984164 kB' 'MemAvailable: 108468336 kB' 'Buffers: 2704 kB' 'Cached: 14565184 kB' 'SwapCached: 0 kB' 'Active: 11624832 kB' 'Inactive: 3523448 kB' 'Active(anon): 11150648 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583784 kB' 'Mapped: 206992 kB' 'Shmem: 10570256 kB' 'KReclaimable: 527484 kB' 'Slab: 1390268 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 862784 kB' 'KernelStack: 27376 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12756460 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235652 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.240 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.240 
13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.241 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104984416 kB' 'MemAvailable: 108468588 kB' 'Buffers: 2704 kB' 'Cached: 14565200 kB' 'SwapCached: 0 kB' 'Active: 11624704 kB' 'Inactive: 3523448 kB' 'Active(anon): 11150520 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583608 kB' 'Mapped: 206992 kB' 'Shmem: 10570272 kB' 'KReclaimable: 527484 kB' 'Slab: 1390268 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 862784 kB' 'KernelStack: 27360 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12756480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235652 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.242 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.243 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:56.244 nr_hugepages=1025 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.244 resv_hugepages=0 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.244 surplus_hugepages=0 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.244 anon_hugepages=0 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == 
nr_hugepages + surp + resv )) 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104985128 kB' 'MemAvailable: 108469300 kB' 'Buffers: 2704 kB' 'Cached: 14565220 kB' 'SwapCached: 0 kB' 'Active: 11624868 kB' 'Inactive: 3523448 kB' 'Active(anon): 11150684 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583784 kB' 'Mapped: 206992 kB' 'Shmem: 10570292 kB' 'KReclaimable: 527484 kB' 'Slab: 1390268 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 862784 kB' 'KernelStack: 27376 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12756500 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235652 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 
13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.244 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.245 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53590568 kB' 'MemUsed: 12068440 kB' 'SwapCached: 0 kB' 'Active: 4927924 kB' 'Inactive: 3299996 kB' 'Active(anon): 4775376 
kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7950676 kB' 'Mapped: 93524 kB' 'AnonPages: 280516 kB' 'Shmem: 4498132 kB' 'KernelStack: 14920 kB' 'PageTables: 4784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394860 kB' 'Slab: 906548 kB' 'SReclaimable: 394860 kB' 'SUnreclaim: 511688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.246 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51395496 kB' 'MemUsed: 9284376 kB' 'SwapCached: 0 kB' 'Active: 6697000 kB' 'Inactive: 223452 kB' 'Active(anon): 6375364 kB' 'Inactive(anon): 0 kB' 'Active(file): 321636 kB' 'Inactive(file): 223452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6617292 kB' 'Mapped: 113468 kB' 'AnonPages: 303256 kB' 'Shmem: 6072204 kB' 'KernelStack: 12456 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132624 kB' 'Slab: 483720 kB' 'SReclaimable: 132624 kB' 'SUnreclaim: 351096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
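
The xtrace above is setup/common.sh's get_meminfo helper doing a linear scan over every key of a meminfo file until it reaches the requested one: HugePages_Total from /proc/meminfo (confirming the odd total of 1025), then HugePages_Surp from /sys/devices/system/node/node0/meminfo and, continuing below, from node1's file. The odd_alloc test expects those 1025 pages to land as 512 on node0 and 513 on node1, which the "node0=512 expecting 513" / "node1=513 expecting 512" echoes further down verify. A minimal standalone sketch of the parsing pattern, mirroring the names in the trace but otherwise simplified (this is not the exact SPDK helper):

#!/usr/bin/env bash
# Simplified sketch of the get_meminfo pattern traced above; not the real
# setup/common.sh implementation.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _

    # With a node argument, prefer that node's own meminfo file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip that prefix
    # (this is what the mem=("${mem[@]#Node +([0-9]) }") entries correspond to).
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total      # system-wide, e.g. 1025 in the trace
get_meminfo HugePages_Surp 0     # node 0 only, e.g. 0 in the trace
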
00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.247 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:56.248 node0=512 expecting 513 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:56.248 node1=513 expecting 512 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:56.248 00:03:56.248 real 0m3.785s 00:03:56.248 user 0m1.526s 00:03:56.248 sys 0m2.314s 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.248 13:34:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:56.248 ************************************ 00:03:56.248 END TEST odd_alloc 00:03:56.248 ************************************ 00:03:56.248 13:34:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:56.248 13:34:22 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:56.248 13:34:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.248 13:34:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.248 13:34:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:56.509 ************************************ 00:03:56.509 START TEST custom_alloc 00:03:56.509 ************************************ 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 
1 )) 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:56.509 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.510 13:34:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.820 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.820 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.820 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.820 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 
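
At this point custom_alloc has built HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' (1536 pages in total, matching the nr_hugepages=1536 that appears once setup.sh returns) and re-runs scripts/setup.sh with that per-node request; the "Already using the vfio-pci driver" lines around here and continuing below are that re-run walking the node's PCI functions. The trace does not show how setup.sh applies the split, but the standard kernel interface for a per-node 2 MiB hugepage allocation is each node's sysfs knob; a hedged illustration of that interface, not a quote from setup.sh:

# Illustration only: request the split encoded in HUGENODE above.
# Needs root, and the kernel may grant fewer pages if memory is fragmented.
echo 512  > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
grep HugePages_Total /proc/meminfo    # expected to report 1536 once both writes land
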
00:03:59.820 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.820 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.820 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.820 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.820 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.820 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:59.820 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.820 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.820 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.820 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.820 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.820 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.820 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.820 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:59.820 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:59.820 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:59.820 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103963400 kB' 'MemAvailable: 107447572 kB' 'Buffers: 2704 kB' 'Cached: 14565356 kB' 'SwapCached: 0 kB' 'Active: 11626680 kB' 'Inactive: 3523448 kB' 'Active(anon): 11152496 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584828 kB' 'Mapped: 207108 kB' 'Shmem: 10570428 kB' 'KReclaimable: 527484 kB' 'Slab: 1389128 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 861644 kB' 'KernelStack: 27392 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12757264 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.821 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.822 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 
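The long run of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" lines above is the xtrace of a single meminfo lookup: the script reads /proc/meminfo (or a per-node meminfo file when a node is given), walks it field by field, and echoes the value of the one it was asked for; here AnonHugePages comes back 0, so anon=0. A compact sketch of that lookup pattern under hypothetical names (get_meminfo_field is not SPDK's actual helper):

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above: print one field's value.
get_meminfo_field() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # Per-node files prefix every line with "Node N "; drop that prefix.
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"          # e.g. AnonHugePages -> 0, HugePages_Total -> 1536
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo_field AnonHugePages    # 0 on the system traced above
get_meminfo_field HugePages_Total  # 1536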
-- # local get=HugePages_Surp 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103964328 kB' 'MemAvailable: 107448500 kB' 'Buffers: 2704 kB' 'Cached: 14565360 kB' 'SwapCached: 0 kB' 'Active: 11625664 kB' 'Inactive: 3523448 kB' 'Active(anon): 11151480 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584760 kB' 'Mapped: 207016 kB' 'Shmem: 10570432 kB' 'KReclaimable: 527484 kB' 'Slab: 1389100 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 861616 kB' 'KernelStack: 27360 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12758376 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235620 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 
13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.823 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.824 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 
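At this point the verification has established anon=0 and surp=0 and is about to look up HugePages_Rsvd the same way. The snapshots it is parsing are internally consistent with the plan built earlier: 512 + 1024 = 1536 pages, HugePages_Total and HugePages_Free both read 1536, and 1536 pages at 2048 kB each is 3145728 kB, which matches the reported Hugetlb figure. A quick restatement of that arithmetic (a sketch of the bookkeeping, not an assertion the script itself makes in this form):

#!/usr/bin/env bash
# Values copied from the meminfo snapshots in the trace above.
node0=512 node1=1024
hugepagesize_kb=2048

total=$(( node0 + node1 ))                 # 1536, matches HugePages_Total/Free
hugetlb_kb=$(( total * hugepagesize_kb ))  # 3145728 kB, matches Hugetlb

echo "expected: HugePages_Total=$total Hugetlb=${hugetlb_kb} kB"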
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103962312 kB' 'MemAvailable: 107446484 kB' 'Buffers: 2704 kB' 'Cached: 14565360 kB' 'SwapCached: 0 kB' 'Active: 11628028 kB' 'Inactive: 3523448 kB' 'Active(anon): 11153844 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586620 kB' 'Mapped: 207520 kB' 'Shmem: 10570432 kB' 'KReclaimable: 527484 kB' 'Slab: 1389100 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 861616 kB' 'KernelStack: 27328 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12759980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.825 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.826 13:34:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.826 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the IFS=': ' read loop walks the remaining /proc/meminfo keys (SwapTotal ... HugePages_Free); none of them is HugePages_Rsvd, so every iteration takes the continue branch]
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:59.828 nr_hugepages=1536
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:59.828 resv_hugepages=0
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:59.828 surplus_hugepages=0
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:59.828 anon_hugepages=0
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103956444 kB' 'MemAvailable: 107440616 kB' 'Buffers: 2704 kB' 'Cached: 14565400 kB' 'SwapCached: 0 kB' 'Active: 11631292 kB' 'Inactive: 3523448 kB' 'Active(anon): 11157108 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589928 kB' 'Mapped: 207920 kB' 'Shmem: 10570472 kB' 'KReclaimable: 527484 kB' 'Slab: 1389100 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 861616 kB' 'KernelStack: 27376 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12763444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235624 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB'
00:03:59.828 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks the /proc/meminfo keys MemTotal ... Unaccepted; none of them is HugePages_Total, so every iteration takes the continue branch]
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
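Every get_meminfo call traced here follows the same pattern: dump /proc/meminfo (or a per-node meminfo file with its "Node <id> " prefix stripped), then walk the "key: value" pairs until the requested key matches and echo its value. A minimal standalone sketch of that parsing idiom follows; the function name get_meminfo_sketch and its plumbing are illustrative assumptions, not the SPDK setup/common.sh source.

#!/usr/bin/env bash
# get_meminfo_sketch: print the value of one field from /proc/meminfo or from a
# per-node meminfo file. Illustrative only; not the setup/common.sh implementation.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val _ mem
    local mem_f=/proc/meminfo
    # Per-node statistics live under sysfs; use them when a node id is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <id> "; strip it, as the trace does.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the key we want: next line
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# On the host traced above these would print 1536, 512 and 1024 respectively.
get_meminfo_sketch HugePages_Total
get_meminfo_sketch HugePages_Total 0
get_meminfo_sketch HugePages_Total 1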
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.830 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53598560 kB' 'MemUsed: 12060448 kB' 'SwapCached: 0 kB' 'Active: 4928188 kB' 'Inactive: 3299996 kB' 'Active(anon): 4775640 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7950700 kB' 'Mapped: 93524 kB' 'AnonPages: 280672 kB' 'Shmem: 4498156 kB' 'KernelStack: 14920 kB' 'PageTables: 4728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394860 kB' 'Slab: 905292 kB' 'SReclaimable: 394860 kB' 'SUnreclaim: 510432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
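get_nodes records 512 pages for node 0 and 1024 for node 1 here. Outside of these test scripts, one way to request the same split of 2048 kB hugepages per NUMA node is the kernel's per-node sysfs knob; a hedged sketch follows (the page counts come from the trace above, while the tee/grep commands are assumptions about how one might drive that interface, not part of setup.sh).

# Request 512 / 1024 hugepages of size 2048 kB on NUMA nodes 0 and 1 (needs root).
declare -A nodes_test=([0]=512 [1]=1024)
for node in "${!nodes_test[@]}"; do
    echo "${nodes_test[$node]}" | sudo tee \
        "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages" > /dev/null
done
# Show what the kernel actually allocated on each node.
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages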
00:03:59.831 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks the node0 meminfo keys MemTotal ... HugePages_Free; none of them is HugePages_Surp, so every iteration takes the continue branch]
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.832 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 50357812 kB' 'MemUsed: 10322060 kB' 'SwapCached: 0 kB' 'Active: 6697548 kB' 'Inactive: 223452 kB' 'Active(anon): 6375912 kB' 'Inactive(anon): 0 kB' 'Active(file): 321636 kB' 'Inactive(file): 223452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6617440 kB' 'Mapped: 113492 kB' 'AnonPages: 303648 kB' 'Shmem: 6072352 kB' 'KernelStack: 12440 kB' 'PageTables: 3932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132624 kB' 'Slab: 483800 kB' 'SReclaimable: 132624 kB' 'SUnreclaim: 351176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:59.833 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks the node1 meminfo keys MemTotal ... HugePages_Free against HugePages_Surp, taking the continue branch for each; the excerpt ends mid-scan]
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.834 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.834 13:34:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.834 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.834 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.834 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.834 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.834 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:59.834 node0=512 expecting 512 00:03:59.834 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.834 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.834 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.834 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:59.834 node1=1024 expecting 1024 00:03:59.834 13:34:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:59.834 00:03:59.834 real 0m3.437s 00:03:59.834 user 0m1.289s 00:03:59.834 sys 0m2.159s 00:03:59.834 13:34:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.834 13:34:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:59.834 ************************************ 00:03:59.834 END TEST custom_alloc 00:03:59.834 ************************************ 00:03:59.834 13:34:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:59.834 13:34:26 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:59.834 13:34:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.834 13:34:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.834 13:34:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.834 ************************************ 00:03:59.834 START TEST no_shrink_alloc 00:03:59.834 ************************************ 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- 
# get_test_nr_hugepages_per_node 0 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.834 13:34:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.136 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:03.136 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:03.136 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:03.136 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:03.136 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:03.136 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:03.136 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:03.136 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:03.136 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:03.136 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:03.136 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:03.136 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:03.136 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:03.136 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:03.136 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:03.136 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:03.136 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.397 
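The get_test_nr_hugepages / get_test_nr_hugepages_per_node trace above boils down to simple arithmetic: the requested 2097152 kB divided by the 2048 kB default hugepage size gives nr_hugepages=1024, and because a single user node ('0') was passed, nodes_test[0] is set to 1024. Below is a minimal sketch of that bookkeeping; request_hugepages and its variables are illustrative stand-ins, not the setup/hugepages.sh functions.

    # Sketch only: mirrors the hugepage bookkeeping shown in the trace above.
    # request_hugepages <size_kB> <node...>  fills nodes_test[] with the expected count per node.
    nodes_test=()
    request_hugepages() {
      local size_kb=$1; shift
      local default_kb
      default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this system
      local pages=$(( size_kb / default_kb ))                         # 2097152 / 2048 = 1024
      local node
      for node in "$@"; do
        nodes_test[node]=$pages    # each listed node is expected to hold the full page count
      done
    }
    request_hugepages 2097152 0
    echo "node0=${nodes_test[0]} expecting 1024"

Run on this node the sketch would print node0=1024 expecting 1024; the custom_alloc check further up does the same comparison per node, joining the observed counts as 512,1024 before matching them against the expected string.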
13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104992016 kB' 'MemAvailable: 108476188 kB' 'Buffers: 2704 kB' 'Cached: 14565532 kB' 'SwapCached: 0 kB' 'Active: 11627520 kB' 'Inactive: 3523448 kB' 'Active(anon): 11153336 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585608 kB' 'Mapped: 207120 kB' 'Shmem: 10570604 kB' 'KReclaimable: 527484 kB' 'Slab: 1389104 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 861620 kB' 'KernelStack: 27456 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12761500 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235828 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.397 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[setup/common.sh get_meminfo xtrace: field after field of /proc/meminfo is read and skipped with continue, none of them matching AnonHugePages yet]
00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104993228 kB' 'MemAvailable: 108477400 kB' 'Buffers: 2704 kB' 'Cached: 14565536 kB' 'SwapCached: 0 kB' 'Active: 11627596 kB' 
'Inactive: 3523448 kB' 'Active(anon): 11153412 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586080 kB' 'Mapped: 207040 kB' 'Shmem: 10570608 kB' 'KReclaimable: 527484 kB' 'Slab: 1389296 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 861812 kB' 'KernelStack: 27552 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12761520 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235812 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.662 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.662 
13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh get_meminfo xtrace: field after field of /proc/meminfo is read and skipped with continue, none of them matching HugePages_Surp yet]
00:04:03.663 13:34:29
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104991184 kB' 'MemAvailable: 108475356 kB' 'Buffers: 2704 kB' 'Cached: 14565552 kB' 'SwapCached: 0 kB' 'Active: 11627332 kB' 'Inactive: 3523448 kB' 'Active(anon): 11153148 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[trace condensed: setup/common.sh@17-31 sets get=HugePages_Rsvd, node= (system-wide), mem_f=/proc/meminfo, and loads the snapshot below via mapfile]
00:04:03.663 13:34:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104991184 kB' 'MemAvailable: 108475356 kB' 'Buffers: 2704 kB' 'Cached: 14565552 kB' 'SwapCached: 0 kB' 'Active: 11627332 kB' 'Inactive: 3523448 kB' 'Active(anon): 11153148 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585832 kB' 'Mapped: 206980 kB' 'Shmem: 10570624 kB' 'KReclaimable: 527484 kB' 'Slab: 1389296 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 861812 kB' 'KernelStack: 27536 kB' 'PageTables: 9236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12761540 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235796 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB'
[trace condensed: setup/common.sh@31-32 reads and skips every field from MemTotal through HugePages_Free; none matches HugePages_Rsvd]
00:04:03.664 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:03.664 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.664 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:03.665 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:03.665 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:03.665 nr_hugepages=1024
00:04:03.665 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:03.665 resv_hugepages=0
00:04:03.665 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:03.665 surplus_hugepages=0
00:04:03.665 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:03.665 anon_hugepages=0
00:04:03.665 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:03.665 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
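The arithmetic guards at hugepages.sh@107 and @109 above check that the 1024 hugepages reported by the kernel line up with the requested count plus surplus and reserved pages. A self-contained sketch of the same consistency check, with the expected count hard-coded to the 1024 pages used by this run (variable names here are illustrative, not the ones in hugepages.sh):

    #!/usr/bin/env bash
    expected=1024
    total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
    surp=$(awk '$1 == "HugePages_Surp:"  { print $2 }' /proc/meminfo)
    resv=$(awk '$1 == "HugePages_Rsvd:"  { print $2 }' /proc/meminfo)
    if (( total == expected + surp + resv )) && (( total == expected )); then
        echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
    else
        echo "unexpected hugepage state: total=$total surp=$surp resv=$resv" >&2
        exit 1
    fi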
00:04:03.665 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[trace condensed: setup/common.sh@17-31 sets get=HugePages_Total, node= (system-wide), mem_f=/proc/meminfo, and loads the snapshot below via mapfile]
00:04:03.665 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104993632 kB' 'MemAvailable: 108477804 kB' 'Buffers: 2704 kB' 'Cached: 14565576 kB' 'SwapCached: 0 kB' 'Active: 11625936 kB' 'Inactive: 3523448 kB' 'Active(anon): 11151752 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584492 kB' 'Mapped: 206192 kB' 'Shmem: 10570648 kB' 'KReclaimable: 527484 kB' 'Slab: 1389296 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 861812 kB' 'KernelStack: 27280 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12725432 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235668 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB'
[trace condensed: setup/common.sh@31-32 reads and skips every field from MemTotal through Unaccepted; none matches HugePages_Total]
00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[trace condensed: setup/common.sh@17-31 sets get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, and loads the node0 snapshot below via mapfile, stripping the "Node 0 " prefix with "${mem[@]#Node +([0-9]) }"]
00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52547504 kB' 'MemUsed: 13111504 kB' 'SwapCached: 0 kB' 'Active: 4928016 kB' 'Inactive: 3299996 kB' 'Active(anon): 4775468 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7950756 kB' 'Mapped: 92684 kB' 'AnonPages: 280492 kB' 'Shmem: 4498212 kB' 'KernelStack: 14760 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394860 kB' 'Slab: 905184 kB' 'SReclaimable: 394860 kB' 'SUnreclaim: 510324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
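The node-level file just read has the same keys as /proc/meminfo, but each line carries a "Node 0 " prefix, which the helper strips before parsing. A quick way to list the per-node hugepage counters directly, shown only as an illustrative sketch:

    shopt -s nullglob
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # HugePages_* counters are page counts, so the value is the last field on the line.
        total=$(awk '/HugePages_Total:/ { print $NF }' "$node_dir/meminfo")
        free=$(awk '/HugePages_Free:/  { print $NF }' "$node_dir/meminfo")
        echo "node$node: HugePages_Total=$total HugePages_Free=$free"
    done
    # Expected on this machine: node0 holds all 1024 pages, node1 holds 0.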
[trace condensed: setup/common.sh@31-32 reads and skips every node0 field from MemTotal through HugePages_Free; none matches HugePages_Surp]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.666 node0=1024 expecting 1024 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.666 13:34:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.967 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.967 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.967 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.967 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.967 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.967 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.967 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.967 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.967 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.967 0000:65:00.0 
(144d a80a): Already using the vfio-pci driver 00:04:06.967 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.967 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.967 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.967 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.967 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.967 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.967 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:07.229 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104979016 kB' 'MemAvailable: 108463188 kB' 'Buffers: 2704 kB' 'Cached: 14565684 kB' 'SwapCached: 0 kB' 'Active: 11630792 kB' 'Inactive: 3523448 kB' 'Active(anon): 11156608 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589128 kB' 'Mapped: 206656 kB' 'Shmem: 10570756 kB' 'KReclaimable: 527484 kB' 'Slab: 1389960 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 862476 kB' 'KernelStack: 27376 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 70509468 kB' 'Committed_AS: 12731760 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.229 13:34:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.229 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.230 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.496 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 
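The xtrace above is setup/common.sh's get_meminfo helper scanning every /proc/meminfo key until it reaches the requested one (AnonHugePages here, which reads 0, hence the anon=0 assignment). For readability, the following is a minimal bash sketch of that lookup pattern as reconstructed from the trace; it is not the SPDK implementation, and the name get_meminfo_sketch and the sed-based stripping of the per-node "Node <N> " prefix are illustrative choices.

get_meminfo_sketch() {
    # Args: <key> [numa-node]; prints the value column for <key>.
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node-specific meminfo when it exists,
    # mirroring the [[ -e /sys/devices/system/node/node$node/meminfo ]] test above.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Node files prefix every line with "Node <N> "; drop it, then split each
    # line on ':' and spaces exactly like the IFS=': ' read loop in the trace.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long run of 'continue' entries above
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

Called as get_meminfo_sketch HugePages_Surp or get_meminfo_sketch HugePages_Total, it prints the bare number (0 and 1024 respectively in the dumps above), which is what the surp= and resv= lookups that follow capture.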
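The hugepages.sh trace that follows (surp from HugePages_Surp, then HugePages_Rsvd), together with the earlier node0=1024 expecting 1024 check, is the per-node verification step. As a rough, hedged outline of that accounting only (the real verify_nr_hugepages in setup/hugepages.sh also folds the surplus/reserved/anon values into its bookkeeping in ways the trace does not fully show), one could write:

verify_nr_hugepages_sketch() {
    # Hypothetical helper, reconstructed for illustration; <expected> defaults
    # to the 1024 pages the test expects on node0 in this run.
    local expected=${1:-1024}
    local anon surp resv node dir total
    # Transparent hugepages must not be pinned to [never], cf. the
    # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test earlier in the trace.
    [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null) != *'[never]'* ]] &&
        anon=$(get_meminfo_sketch AnonHugePages)   # mirrors the anon= assignment above
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    # Compare each NUMA node's persistent hugepage count with the expectation.
    for dir in /sys/devices/system/node/node[0-9]*; do
        [[ -d $dir ]] || continue
        node=${dir##*node}
        total=$(get_meminfo_sketch HugePages_Total "$node")
        echo "node$node=$total expecting $expected"
        [[ $total -eq $expected ]] || return 1
    done
}

On this run node0 reports 1024 persistent pages with 0 surplus and 0 reserved, so the check passes and the test re-runs scripts/setup.sh with CLEAR_HUGE=no and NRHUGE=512, which produces the "Requested 512 hugepages but 1024 already allocated on node0" message seen above.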
00:04:07.496 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.496 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.496 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.496 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.496 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104974328 kB' 'MemAvailable: 108458500 kB' 'Buffers: 2704 kB' 'Cached: 14565684 kB' 'SwapCached: 0 kB' 'Active: 11633420 kB' 'Inactive: 3523448 kB' 'Active(anon): 11159236 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591812 kB' 'Mapped: 206640 kB' 'Shmem: 10570756 kB' 'KReclaimable: 527484 kB' 'Slab: 1389960 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 862476 kB' 'KernelStack: 27424 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12734168 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235560 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.497 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104976528 kB' 'MemAvailable: 108460700 kB' 'Buffers: 2704 kB' 'Cached: 14565704 kB' 'SwapCached: 0 kB' 'Active: 11627204 kB' 'Inactive: 3523448 kB' 'Active(anon): 11153020 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585628 kB' 'Mapped: 206200 kB' 'Shmem: 10570776 kB' 'KReclaimable: 527484 kB' 'Slab: 1390040 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 862556 kB' 'KernelStack: 27184 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12727700 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:04:07.498 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.499 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.500 nr_hugepages=1024 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.500 resv_hugepages=0 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.500 surplus_hugepages=0 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.500 anon_hugepages=0 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104977596 kB' 'MemAvailable: 108461768 kB' 'Buffers: 2704 kB' 'Cached: 14565724 kB' 'SwapCached: 0 kB' 'Active: 11626968 kB' 'Inactive: 3523448 kB' 'Active(anon): 11152784 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585216 kB' 'Mapped: 206200 kB' 'Shmem: 10570796 kB' 'KReclaimable: 527484 kB' 'Slab: 1389696 kB' 'SReclaimable: 527484 kB' 'SUnreclaim: 862212 kB' 'KernelStack: 27280 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12724628 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 137088 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4355444 kB' 'DirectMap2M: 28878848 kB' 'DirectMap1G: 102760448 kB' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.500 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
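The xtrace above and below is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" pair at a time with IFS=': ', skipping every field until it reaches the one it was asked for (HugePages_Total here) and echoing that value. A minimal stand-alone sketch of the same lookup — get_mem_value and its 0 fallback are illustrative names/behaviour, not the project's helper — could look like this:

```bash
#!/usr/bin/env bash
# Illustrative re-implementation of the lookup traced here; get_mem_value is a
# hypothetical name, not the helper from setup/common.sh.
get_mem_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A per-node query reads that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#Node [0-9]* }          # drop the "Node N " prefix used in per-node files
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0                                 # simplification: treat a missing key as 0
}

get_mem_value HugePages_Total      # system-wide count
get_mem_value HugePages_Free 0     # NUMA node 0 only
```

The per-node variant of this lookup is what later produces the Node 0 numbers behind the "node0=1024 expecting 1024" check.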
00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.501 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 
13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.502 13:34:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.502 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52533480 kB' 'MemUsed: 13125528 kB' 'SwapCached: 0 kB' 'Active: 4928944 kB' 'Inactive: 3299996 kB' 'Active(anon): 4776396 kB' 'Inactive(anon): 0 kB' 'Active(file): 152548 kB' 'Inactive(file): 3299996 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7950804 kB' 'Mapped: 92684 kB' 'AnonPages: 281316 kB' 'Shmem: 4498260 kB' 'KernelStack: 14872 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 394860 kB' 'Slab: 905028 kB' 'SReclaimable: 394860 kB' 'SUnreclaim: 510168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.503 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.504 
13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.504 13:34:33 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:07.504 node0=1024 expecting 1024 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.504 00:04:07.504 real 0m7.590s 00:04:07.504 user 0m3.089s 00:04:07.504 sys 0m4.618s 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.504 13:34:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.504 ************************************ 00:04:07.504 END TEST no_shrink_alloc 00:04:07.504 ************************************ 00:04:07.504 13:34:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:07.504 13:34:33 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:07.504 13:34:33 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:07.504 13:34:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:07.504 13:34:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.504 13:34:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.504 13:34:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.504 13:34:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.504 13:34:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:07.504 13:34:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.504 13:34:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.504 13:34:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.504 13:34:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.504 13:34:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:07.504 13:34:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:07.504 00:04:07.504 real 0m27.083s 00:04:07.504 user 0m10.771s 00:04:07.504 sys 0m16.677s 00:04:07.504 13:34:33 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.504 13:34:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.504 ************************************ 00:04:07.504 END TEST hugepages 00:04:07.504 ************************************ 00:04:07.504 13:34:33 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:07.504 13:34:33 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:07.504 13:34:33 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.504 13:34:33 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.504 13:34:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:07.765 ************************************ 00:04:07.765 START TEST driver 00:04:07.765 ************************************ 00:04:07.765 13:34:34 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:07.765 * Looking for test storage... 
00:04:07.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:07.765 13:34:34 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:07.765 13:34:34 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.765 13:34:34 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.068 13:34:38 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:13.068 13:34:38 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.069 13:34:38 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.069 13:34:38 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.069 ************************************ 00:04:13.069 START TEST guess_driver 00:04:13.069 ************************************ 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:13.069 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:13.069 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:13.069 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:13.069 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:13.069 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:13.069 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:13.069 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:13.069 13:34:38 setup.sh.driver.guess_driver 
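The pick_driver trace just above boils down to three checks: is unsafe no-IOMMU mode enabled, are there any IOMMU groups, and does modprobe --show-depends resolve vfio_pci to real kernel modules. A rough standalone version of that decision; the uio_pci_generic fallback is an assumption, since this part of the trace only exercises the vfio path:

    #!/usr/bin/env bash
    pick_pci_driver() {
        shopt -s nullglob
        local unsafe_vfio=N
        local groups=(/sys/kernel/iommu_groups/*)

        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi

        # vfio-pci is usable when the IOMMU is active (groups exist) or unsafe
        # no-IOMMU mode was turned on, and its dependency chain resolves to .ko files.
        if { ((${#groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; } \
            && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo uio_pci_generic   # assumed fallback, not shown in this trace
        fi
    }

    pick_pci_driver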
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:13.069 Looking for driver=vfio-pci 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.069 13:34:38 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.371 13:34:42 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.674 00:04:21.674 real 0m8.459s 00:04:21.674 user 0m2.727s 00:04:21.674 sys 0m4.949s 00:04:21.674 13:34:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.674 13:34:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:21.674 ************************************ 00:04:21.674 END TEST guess_driver 00:04:21.674 ************************************ 00:04:21.674 13:34:47 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:21.674 00:04:21.674 real 0m13.345s 00:04:21.674 user 0m4.084s 00:04:21.674 sys 0m7.664s 00:04:21.674 13:34:47 
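Each repeated [[ -> == -> ]] / [[ vfio-pci == vfio-pci ]] pair above is guess_driver checking one line of setup.sh config output: the fifth field is treated as the rebind arrow and the sixth as the driver that was chosen for that device, and fail is raised if any device ends up on something other than the expected driver. A small sketch of the same parse, assuming per-device lines shaped like 'BDF (vendor device): old_driver -> new_driver':

    #!/usr/bin/env bash
    expected=vfio-pci
    fail=0

    # Parse the per-device lines printed by setup.sh config
    # (path assumed relative to an SPDK checkout).
    while read -r _ _ _ _ marker driver; do
        [[ $marker == "->" ]] || continue       # skip lines without a rebind arrow
        [[ $driver == "$expected" ]] || fail=1  # some device got a different driver
    done < <(sudo ./scripts/setup.sh config)

    (( fail == 0 )) && echo "all devices bound to $expected"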
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.674 13:34:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:21.674 ************************************ 00:04:21.674 END TEST driver 00:04:21.674 ************************************ 00:04:21.674 13:34:47 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:21.674 13:34:47 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:21.674 13:34:47 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.674 13:34:47 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.674 13:34:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:21.674 ************************************ 00:04:21.674 START TEST devices 00:04:21.674 ************************************ 00:04:21.674 13:34:47 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:21.674 * Looking for test storage... 00:04:21.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:21.674 13:34:47 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:21.674 13:34:47 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:21.674 13:34:47 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.674 13:34:47 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.979 13:34:51 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:24.979 13:34:51 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:24.979 13:34:51 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:24.979 13:34:51 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:24.979 13:34:51 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:24.979 13:34:51 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:24.979 13:34:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:24.979 13:34:51 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:24.979 13:34:51 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:24.979 13:34:51 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:24.979 13:34:51 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:24.979 13:34:51 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:24.979 13:34:51 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:24.979 13:34:51 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:24.979 13:34:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:24.979 13:34:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:24.979 13:34:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:24.979 13:34:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:24.979 13:34:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:24.979 13:34:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:24.979 13:34:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:24.979 
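get_zoned_devs, traced at the top of the devices suite, only exists to keep zoned namespaces out of the mount tests. A compact equivalent, assuming the usual /sys/block layout:

    #!/usr/bin/env bash
    # Collect NVMe block devices whose queue reports a zoned model other than "none".
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        [[ $(< "$nvme/queue/zoned") != none ]] && zoned_devs[${nvme##*/}]=1
    done

    if ((${#zoned_devs[@]})); then
        echo "zoned devices: ${!zoned_devs[*]}"
    else
        echo "no zoned devices found"
    fi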
13:34:51 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:25.238 No valid GPT data, bailing 00:04:25.238 13:34:51 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:25.238 13:34:51 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:25.238 13:34:51 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:25.239 13:34:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:25.239 13:34:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:25.239 13:34:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:25.239 13:34:51 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:25.239 13:34:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:25.239 13:34:51 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:25.239 13:34:51 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:25.239 13:34:51 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:25.239 13:34:51 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:25.239 13:34:51 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:25.239 13:34:51 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.239 13:34:51 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.239 13:34:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:25.239 ************************************ 00:04:25.239 START TEST nvme_mount 00:04:25.239 ************************************ 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
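Before a namespace is used, devices.sh checks that nothing already lives on it (spdk-gpt.py finds no valid GPT above, and blkid reports no partition-table type) and that it is large enough; the 1920383410176 echoed in the trace is the size in bytes, compared against a 3 GiB floor. The same two checks can be done with just blkid and sysfs:

    #!/usr/bin/env bash
    dev=nvme0n1
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the trace

    # 1) Skip disks that already carry a partition table.
    if [[ -n $(sudo blkid -s PTTYPE -o value "/dev/$dev") ]]; then
        echo "/dev/$dev is already partitioned, skipping"
        exit 0
    fi

    # 2) Size in bytes = 512-byte sectors reported by sysfs, times 512.
    size=$(( $(< "/sys/block/$dev/size") * 512 ))
    (( size >= min_disk_size )) && echo "/dev/$dev is usable ($size bytes)"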
# (( part <= part_no )) 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:25.239 13:34:51 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:26.179 Creating new GPT entries in memory. 00:04:26.179 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:26.179 other utilities. 00:04:26.179 13:34:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:26.179 13:34:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.179 13:34:52 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:26.179 13:34:52 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.179 13:34:52 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:27.121 Creating new GPT entries in memory. 00:04:27.121 The operation has completed successfully. 00:04:27.121 13:34:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:27.121 13:34:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.121 13:34:53 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 850399 00:04:27.121 13:34:53 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.121 13:34:53 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:27.121 13:34:53 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.382 13:34:53 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:27.382 13:34:53 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:27.382 13:34:53 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.382 13:34:53 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.382 13:34:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:27.382 13:34:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:27.382 13:34:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.382 13:34:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.382 13:34:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.382 13:34:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.382 13:34:53 
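The setup sequence that just ran reduces to three tools: sgdisk to wipe the label and create a 1 GiB partition (sectors 2048 through 2099199), mkfs.ext4 to format it, and a plain mount onto the test directory. Roughly, with an illustrative mount point in place of the repository path:

    #!/usr/bin/env bash
    set -e
    disk=/dev/nvme0n1
    mnt=/tmp/nvme_mount      # illustrative; the test mounts under spdk/test/setup/nvme_mount

    sudo sgdisk "$disk" --zap-all                # drop any existing GPT/MBR
    sudo sgdisk "$disk" --new=1:2048:2099199     # 1 GiB partition, same bounds as the trace
    sudo partprobe "$disk"                       # the script waits on udev events instead

    sudo mkfs.ext4 -qF "${disk}p1"
    sudo mkdir -p "$mnt"
    sudo mount "${disk}p1" "$mnt"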
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:27.382 13:34:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.382 13:34:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.382 13:34:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:27.382 13:34:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.382 13:34:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.382 13:34:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- 
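The verify step works entirely off setup.sh output: with PCI_ALLOWED narrowed to the one controller, it expects setup.sh config to refuse to rebind that device and checks the reason by glob-matching the 'Active devices: ...' text against the mount it just created. A stripped-down version of that check, with the status wording taken from the trace and the script path assumed relative to an SPDK checkout:

    #!/usr/bin/env bash
    target=0000:65:00.0
    expected_mount=nvme0n1:nvme0n1p1
    found=0

    while read -r pci _ _ status; do
        [[ $pci == "$target" ]] || continue
        # e.g. "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
        [[ $status == *"Active devices: "*"$expected_mount"* ]] && found=1
    done < <(sudo PCI_ALLOWED=$target ./scripts/setup.sh config)

    (( found == 1 )) && echo "controller left with the kernel driver, as expected"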
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.683 13:34:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.683 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.683 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:30.683 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.683 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.683 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.683 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:30.683 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.683 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.683 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:30.683 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:30.944 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:30.944 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:30.944 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:31.206 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:31.206 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:31.206 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:31.206 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- 
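cleanup_nvme is the inverse of the setup: unmount if the directory is still a mountpoint, then wipe the filesystem and partition-table signatures so the next subtest starts from a blank namespace; the '53 ef' and '45 46 49 20 50 41 52 54' bytes reported above are the ext4 magic and the 'EFI PART' GPT signature being erased. In isolation that is roughly:

    #!/usr/bin/env bash
    mnt=/tmp/nvme_mount      # illustrative mount point
    disk=/dev/nvme0n1

    mountpoint -q "$mnt" && sudo umount "$mnt"
    [[ -b ${disk}p1 ]] && sudo wipefs --all "${disk}p1"   # drops the ext4 superblock magic
    [[ -b $disk ]] && sudo wipefs --all "$disk"           # drops the GPT headers and the PMBR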
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.206 13:34:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.590 13:35:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.590 13:35:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.590 13:35:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:34.590 13:35:01 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.590 13:35:01 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:34.590 13:35:01 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:34.590 13:35:01 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.854 13:35:01 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:34.854 13:35:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:34.854 13:35:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:34.854 13:35:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:34.854 13:35:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:34.854 13:35:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:34.854 13:35:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:34.854 13:35:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:34.854 13:35:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.854 13:35:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:34.854 13:35:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:34.854 13:35:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.854 13:35:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.156 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.156 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.156 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.156 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.156 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.156 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.156 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.156 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.157 13:35:04 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:38.157 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.416 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.416 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:38.416 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:38.416 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:38.416 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.416 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.416 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.416 13:35:04 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:38.416 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.416 00:04:38.416 real 0m13.156s 00:04:38.416 user 0m4.165s 00:04:38.416 sys 0m6.828s 00:04:38.416 13:35:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.416 13:35:04 
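In this second pass the filesystem sits on the bare namespace, so after the unmount the verify above only needs setup.sh to keep reporting 'data@nvme0n1': a leftover filesystem signature is enough to keep the controller from being rebound. One way to make the same observation directly, assuming blkid prints nothing when no signature is found:

    #!/usr/bin/env bash
    dev=/dev/nvme0n1
    fstype=$(sudo blkid -o value -s TYPE "$dev" || true)

    if [[ -n $fstype ]]; then
        echo "$dev still carries a $fstype filesystem; setup.sh will not rebind it"
    else
        echo "$dev looks blank"
    fi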
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:38.416 ************************************ 00:04:38.416 END TEST nvme_mount 00:04:38.416 ************************************ 00:04:38.416 13:35:04 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:38.416 13:35:04 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:38.416 13:35:04 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.416 13:35:04 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.416 13:35:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:38.416 ************************************ 00:04:38.416 START TEST dm_mount 00:04:38.416 ************************************ 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.416 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.417 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:38.417 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.417 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:38.417 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:38.417 13:35:04 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:39.359 Creating new GPT entries in memory. 00:04:39.359 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:39.359 other utilities. 00:04:39.359 13:35:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:39.359 13:35:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.359 13:35:05 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:39.359 13:35:05 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:39.359 13:35:05 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:40.744 Creating new GPT entries in memory. 00:04:40.744 The operation has completed successfully. 00:04:40.744 13:35:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:40.744 13:35:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.744 13:35:06 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.744 13:35:06 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.744 13:35:06 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:41.689 The operation has completed successfully. 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 855504 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- 
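dm_mount repeats the partitioning step but carves two adjacent 1 GiB partitions out of the namespace, which later become the legs of the device-mapper target; the sector ranges below match the two sgdisk calls in the trace. On its own that is simply:

    #!/usr/bin/env bash
    set -e
    disk=/dev/nvme0n1

    sudo sgdisk "$disk" --zap-all
    sudo sgdisk "$disk" --new=1:2048:2099199      # becomes nvme0n1p1
    sudo sgdisk "$disk" --new=2:2099200:4196351   # becomes nvme0n1p2
    sudo partprobe "$disk"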
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.689 13:35:07 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
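The mapped device is created with dmsetup, and the holders/dm-0 checks above confirm that both partitions ended up underneath it before it is formatted and mounted like any other block device. The table here is an assumption, since the trace does not show what devices.sh feeds to dmsetup; a linear concatenation of the two 1 GiB partitions is the simplest mapping consistent with those holder links:

    #!/usr/bin/env bash
    set -e
    name=nvme_dm_test
    mnt=/tmp/dm_mount        # illustrative; the test mounts under spdk/test/setup/dm_mount

    # 2097152 sectors = 1 GiB per leg, matching the partitions created above.
    printf '%s\n' \
        '0       2097152 linear /dev/nvme0n1p1 0' \
        '2097152 2097152 linear /dev/nvme0n1p2 0' \
        | sudo dmsetup create "$name"

    readlink -f "/dev/mapper/$name"               # resolves to /dev/dm-0 in the trace
    ls "/sys/class/block/nvme0n1p1/holders/"      # should list the dm-* node

    sudo mkfs.ext4 -qF "/dev/mapper/$name"
    sudo mkdir -p "$mnt"
    sudo mount "/dev/mapper/$name" "$mnt"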
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.990 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:45.249 13:35:11 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.249 13:35:11 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:48.546 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:48.546 13:35:14 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:48.546 00:04:48.546 real 0m10.040s 00:04:48.546 user 0m2.463s 00:04:48.546 sys 0m4.552s 00:04:48.547 13:35:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.547 13:35:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:48.547 ************************************ 00:04:48.547 END TEST dm_mount 00:04:48.547 ************************************ 00:04:48.547 13:35:14 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:04:48.547 13:35:14 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:48.547 13:35:14 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:48.547 13:35:14 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.547 13:35:14 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.547 13:35:14 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:48.547 13:35:14 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.547 13:35:14 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:48.808 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:48.808 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:48.808 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:48.808 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:48.808 13:35:15 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:48.808 13:35:15 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.808 13:35:15 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:48.808 13:35:15 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.808 13:35:15 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:48.808 13:35:15 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.808 13:35:15 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:48.808 00:04:48.808 real 0m27.721s 00:04:48.808 user 0m8.257s 00:04:48.808 sys 0m14.150s 00:04:48.808 13:35:15 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.808 13:35:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:48.808 ************************************ 00:04:48.808 END TEST devices 00:04:48.808 ************************************ 00:04:48.808 13:35:15 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:48.808 00:04:48.808 real 1m33.043s 00:04:48.808 user 0m31.010s 00:04:48.808 sys 0m53.064s 00:04:48.808 13:35:15 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.808 13:35:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:48.808 ************************************ 00:04:48.808 END TEST setup.sh 00:04:48.808 ************************************ 00:04:48.808 13:35:15 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.808 13:35:15 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:52.144 Hugepages 00:04:52.144 node hugesize free / total 00:04:52.144 node0 1048576kB 0 / 0 00:04:52.144 node0 2048kB 2048 / 2048 00:04:52.144 node1 1048576kB 0 / 0 00:04:52.144 node1 2048kB 0 / 0 00:04:52.144 00:04:52.144 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:52.144 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:52.144 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:52.144 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:52.144 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:52.144 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:52.144 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:52.144 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:52.144 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:52.144 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:52.144 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:52.144 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:52.144 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:52.144 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:52.144 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:52.144 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:52.144 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:52.144 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:52.144 13:35:18 -- spdk/autotest.sh@130 -- # uname -s 00:04:52.144 13:35:18 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:52.144 13:35:18 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:52.144 13:35:18 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:55.441 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:55.441 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:55.441 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:55.441 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:55.441 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:55.441 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:55.441 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:55.441 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:55.441 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:55.701 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:55.701 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:55.701 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:55.701 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:55.701 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:55.701 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:55.701 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:57.686 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:57.686 13:35:24 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:58.627 13:35:25 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:58.627 13:35:25 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:58.627 13:35:25 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:58.627 13:35:25 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:58.627 13:35:25 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:58.627 13:35:25 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:58.627 13:35:25 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.627 13:35:25 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.627 13:35:25 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:58.888 13:35:25 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:58.888 13:35:25 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:58.888 13:35:25 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.191 Waiting for block devices as requested 00:05:02.191 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:02.191 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:02.191 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:02.191 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:02.451 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:02.451 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:02.451 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:02.712 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:02.712 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:05:02.973 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:02.973 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:02.973 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:03.232 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:03.232 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:03.232 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:03.232 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:03.493 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:03.754 13:35:30 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:03.754 13:35:30 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:03.754 13:35:30 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:03.754 13:35:30 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:05:03.754 13:35:30 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:03.754 13:35:30 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:03.754 13:35:30 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:03.754 13:35:30 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:03.754 13:35:30 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:03.754 13:35:30 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:03.754 13:35:30 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:03.754 13:35:30 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:03.754 13:35:30 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:03.754 13:35:30 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:05:03.754 13:35:30 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:03.754 13:35:30 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:03.754 13:35:30 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:03.754 13:35:30 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:03.754 13:35:30 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:03.754 13:35:30 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:03.754 13:35:30 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:03.754 13:35:30 -- common/autotest_common.sh@1557 -- # continue 00:05:03.754 13:35:30 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:03.754 13:35:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.754 13:35:30 -- common/autotest_common.sh@10 -- # set +x 00:05:03.754 13:35:30 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:03.754 13:35:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.754 13:35:30 -- common/autotest_common.sh@10 -- # set +x 00:05:03.754 13:35:30 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:07.053 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:07.053 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:07.053 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:07.053 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:07.053 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:07.053 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:07.053 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:07.053 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:07.053 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:07.053 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
00:05:07.053 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:07.313 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:07.313 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:07.313 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:07.313 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:07.313 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:07.313 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:07.573 13:35:33 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:07.573 13:35:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.573 13:35:33 -- common/autotest_common.sh@10 -- # set +x 00:05:07.573 13:35:34 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:07.573 13:35:34 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:07.573 13:35:34 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:07.573 13:35:34 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:07.573 13:35:34 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:07.573 13:35:34 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:07.573 13:35:34 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:07.573 13:35:34 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:07.573 13:35:34 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.573 13:35:34 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:07.573 13:35:34 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:07.834 13:35:34 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:07.834 13:35:34 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:07.834 13:35:34 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:07.834 13:35:34 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:07.834 13:35:34 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:07.834 13:35:34 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:07.834 13:35:34 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:07.834 13:35:34 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:07.834 13:35:34 -- common/autotest_common.sh@1593 -- # return 0 00:05:07.835 13:35:34 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:07.835 13:35:34 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:07.835 13:35:34 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:07.835 13:35:34 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:07.835 13:35:34 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:07.835 13:35:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:07.835 13:35:34 -- common/autotest_common.sh@10 -- # set +x 00:05:07.835 13:35:34 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:07.835 13:35:34 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:07.835 13:35:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.835 13:35:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.835 13:35:34 -- common/autotest_common.sh@10 -- # set +x 00:05:07.835 ************************************ 00:05:07.835 START TEST env 00:05:07.835 ************************************ 00:05:07.835 13:35:34 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:07.835 * Looking for test storage... 
00:05:07.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:07.835 13:35:34 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:07.835 13:35:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.835 13:35:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.835 13:35:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.835 ************************************ 00:05:07.835 START TEST env_memory 00:05:07.835 ************************************ 00:05:07.835 13:35:34 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:07.835 00:05:07.835 00:05:07.835 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.835 http://cunit.sourceforge.net/ 00:05:07.835 00:05:07.835 00:05:07.835 Suite: memory 00:05:08.096 Test: alloc and free memory map ...[2024-07-15 13:35:34.363887] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:08.096 passed 00:05:08.096 Test: mem map translation ...[2024-07-15 13:35:34.389621] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:08.096 [2024-07-15 13:35:34.389658] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:08.096 [2024-07-15 13:35:34.389706] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:08.096 [2024-07-15 13:35:34.389713] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:08.096 passed 00:05:08.096 Test: mem map registration ...[2024-07-15 13:35:34.445118] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:08.096 [2024-07-15 13:35:34.445150] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:08.096 passed 00:05:08.096 Test: mem map adjacent registrations ...passed 00:05:08.096 00:05:08.096 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.096 suites 1 1 n/a 0 0 00:05:08.096 tests 4 4 4 0 0 00:05:08.096 asserts 152 152 152 0 n/a 00:05:08.096 00:05:08.096 Elapsed time = 0.193 seconds 00:05:08.096 00:05:08.096 real 0m0.208s 00:05:08.096 user 0m0.197s 00:05:08.096 sys 0m0.010s 00:05:08.096 13:35:34 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.096 13:35:34 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:08.096 ************************************ 00:05:08.096 END TEST env_memory 00:05:08.096 ************************************ 00:05:08.096 13:35:34 env -- common/autotest_common.sh@1142 -- # return 0 00:05:08.096 13:35:34 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:08.096 13:35:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:05:08.096 13:35:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.096 13:35:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.096 ************************************ 00:05:08.096 START TEST env_vtophys 00:05:08.096 ************************************ 00:05:08.096 13:35:34 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:08.357 EAL: lib.eal log level changed from notice to debug 00:05:08.357 EAL: Detected lcore 0 as core 0 on socket 0 00:05:08.357 EAL: Detected lcore 1 as core 1 on socket 0 00:05:08.357 EAL: Detected lcore 2 as core 2 on socket 0 00:05:08.357 EAL: Detected lcore 3 as core 3 on socket 0 00:05:08.357 EAL: Detected lcore 4 as core 4 on socket 0 00:05:08.357 EAL: Detected lcore 5 as core 5 on socket 0 00:05:08.357 EAL: Detected lcore 6 as core 6 on socket 0 00:05:08.357 EAL: Detected lcore 7 as core 7 on socket 0 00:05:08.357 EAL: Detected lcore 8 as core 8 on socket 0 00:05:08.357 EAL: Detected lcore 9 as core 9 on socket 0 00:05:08.357 EAL: Detected lcore 10 as core 10 on socket 0 00:05:08.357 EAL: Detected lcore 11 as core 11 on socket 0 00:05:08.357 EAL: Detected lcore 12 as core 12 on socket 0 00:05:08.357 EAL: Detected lcore 13 as core 13 on socket 0 00:05:08.357 EAL: Detected lcore 14 as core 14 on socket 0 00:05:08.357 EAL: Detected lcore 15 as core 15 on socket 0 00:05:08.357 EAL: Detected lcore 16 as core 16 on socket 0 00:05:08.357 EAL: Detected lcore 17 as core 17 on socket 0 00:05:08.357 EAL: Detected lcore 18 as core 18 on socket 0 00:05:08.357 EAL: Detected lcore 19 as core 19 on socket 0 00:05:08.357 EAL: Detected lcore 20 as core 20 on socket 0 00:05:08.357 EAL: Detected lcore 21 as core 21 on socket 0 00:05:08.357 EAL: Detected lcore 22 as core 22 on socket 0 00:05:08.358 EAL: Detected lcore 23 as core 23 on socket 0 00:05:08.358 EAL: Detected lcore 24 as core 24 on socket 0 00:05:08.358 EAL: Detected lcore 25 as core 25 on socket 0 00:05:08.358 EAL: Detected lcore 26 as core 26 on socket 0 00:05:08.358 EAL: Detected lcore 27 as core 27 on socket 0 00:05:08.358 EAL: Detected lcore 28 as core 28 on socket 0 00:05:08.358 EAL: Detected lcore 29 as core 29 on socket 0 00:05:08.358 EAL: Detected lcore 30 as core 30 on socket 0 00:05:08.358 EAL: Detected lcore 31 as core 31 on socket 0 00:05:08.358 EAL: Detected lcore 32 as core 32 on socket 0 00:05:08.358 EAL: Detected lcore 33 as core 33 on socket 0 00:05:08.358 EAL: Detected lcore 34 as core 34 on socket 0 00:05:08.358 EAL: Detected lcore 35 as core 35 on socket 0 00:05:08.358 EAL: Detected lcore 36 as core 0 on socket 1 00:05:08.358 EAL: Detected lcore 37 as core 1 on socket 1 00:05:08.358 EAL: Detected lcore 38 as core 2 on socket 1 00:05:08.358 EAL: Detected lcore 39 as core 3 on socket 1 00:05:08.358 EAL: Detected lcore 40 as core 4 on socket 1 00:05:08.358 EAL: Detected lcore 41 as core 5 on socket 1 00:05:08.358 EAL: Detected lcore 42 as core 6 on socket 1 00:05:08.358 EAL: Detected lcore 43 as core 7 on socket 1 00:05:08.358 EAL: Detected lcore 44 as core 8 on socket 1 00:05:08.358 EAL: Detected lcore 45 as core 9 on socket 1 00:05:08.358 EAL: Detected lcore 46 as core 10 on socket 1 00:05:08.358 EAL: Detected lcore 47 as core 11 on socket 1 00:05:08.358 EAL: Detected lcore 48 as core 12 on socket 1 00:05:08.358 EAL: Detected lcore 49 as core 13 on socket 1 00:05:08.358 EAL: Detected lcore 50 as core 14 on socket 1 00:05:08.358 EAL: Detected lcore 51 as core 15 on socket 1 00:05:08.358 
EAL: Detected lcore 52 as core 16 on socket 1 00:05:08.358 EAL: Detected lcore 53 as core 17 on socket 1 00:05:08.358 EAL: Detected lcore 54 as core 18 on socket 1 00:05:08.358 EAL: Detected lcore 55 as core 19 on socket 1 00:05:08.358 EAL: Detected lcore 56 as core 20 on socket 1 00:05:08.358 EAL: Detected lcore 57 as core 21 on socket 1 00:05:08.358 EAL: Detected lcore 58 as core 22 on socket 1 00:05:08.358 EAL: Detected lcore 59 as core 23 on socket 1 00:05:08.358 EAL: Detected lcore 60 as core 24 on socket 1 00:05:08.358 EAL: Detected lcore 61 as core 25 on socket 1 00:05:08.358 EAL: Detected lcore 62 as core 26 on socket 1 00:05:08.358 EAL: Detected lcore 63 as core 27 on socket 1 00:05:08.358 EAL: Detected lcore 64 as core 28 on socket 1 00:05:08.358 EAL: Detected lcore 65 as core 29 on socket 1 00:05:08.358 EAL: Detected lcore 66 as core 30 on socket 1 00:05:08.358 EAL: Detected lcore 67 as core 31 on socket 1 00:05:08.358 EAL: Detected lcore 68 as core 32 on socket 1 00:05:08.358 EAL: Detected lcore 69 as core 33 on socket 1 00:05:08.358 EAL: Detected lcore 70 as core 34 on socket 1 00:05:08.358 EAL: Detected lcore 71 as core 35 on socket 1 00:05:08.358 EAL: Detected lcore 72 as core 0 on socket 0 00:05:08.358 EAL: Detected lcore 73 as core 1 on socket 0 00:05:08.358 EAL: Detected lcore 74 as core 2 on socket 0 00:05:08.358 EAL: Detected lcore 75 as core 3 on socket 0 00:05:08.358 EAL: Detected lcore 76 as core 4 on socket 0 00:05:08.358 EAL: Detected lcore 77 as core 5 on socket 0 00:05:08.358 EAL: Detected lcore 78 as core 6 on socket 0 00:05:08.358 EAL: Detected lcore 79 as core 7 on socket 0 00:05:08.358 EAL: Detected lcore 80 as core 8 on socket 0 00:05:08.358 EAL: Detected lcore 81 as core 9 on socket 0 00:05:08.358 EAL: Detected lcore 82 as core 10 on socket 0 00:05:08.358 EAL: Detected lcore 83 as core 11 on socket 0 00:05:08.358 EAL: Detected lcore 84 as core 12 on socket 0 00:05:08.358 EAL: Detected lcore 85 as core 13 on socket 0 00:05:08.358 EAL: Detected lcore 86 as core 14 on socket 0 00:05:08.358 EAL: Detected lcore 87 as core 15 on socket 0 00:05:08.358 EAL: Detected lcore 88 as core 16 on socket 0 00:05:08.358 EAL: Detected lcore 89 as core 17 on socket 0 00:05:08.358 EAL: Detected lcore 90 as core 18 on socket 0 00:05:08.358 EAL: Detected lcore 91 as core 19 on socket 0 00:05:08.358 EAL: Detected lcore 92 as core 20 on socket 0 00:05:08.358 EAL: Detected lcore 93 as core 21 on socket 0 00:05:08.358 EAL: Detected lcore 94 as core 22 on socket 0 00:05:08.358 EAL: Detected lcore 95 as core 23 on socket 0 00:05:08.358 EAL: Detected lcore 96 as core 24 on socket 0 00:05:08.358 EAL: Detected lcore 97 as core 25 on socket 0 00:05:08.358 EAL: Detected lcore 98 as core 26 on socket 0 00:05:08.358 EAL: Detected lcore 99 as core 27 on socket 0 00:05:08.358 EAL: Detected lcore 100 as core 28 on socket 0 00:05:08.358 EAL: Detected lcore 101 as core 29 on socket 0 00:05:08.358 EAL: Detected lcore 102 as core 30 on socket 0 00:05:08.358 EAL: Detected lcore 103 as core 31 on socket 0 00:05:08.358 EAL: Detected lcore 104 as core 32 on socket 0 00:05:08.358 EAL: Detected lcore 105 as core 33 on socket 0 00:05:08.358 EAL: Detected lcore 106 as core 34 on socket 0 00:05:08.358 EAL: Detected lcore 107 as core 35 on socket 0 00:05:08.358 EAL: Detected lcore 108 as core 0 on socket 1 00:05:08.358 EAL: Detected lcore 109 as core 1 on socket 1 00:05:08.358 EAL: Detected lcore 110 as core 2 on socket 1 00:05:08.358 EAL: Detected lcore 111 as core 3 on socket 1 00:05:08.358 EAL: Detected 
lcore 112 as core 4 on socket 1 00:05:08.358 EAL: Detected lcore 113 as core 5 on socket 1 00:05:08.358 EAL: Detected lcore 114 as core 6 on socket 1 00:05:08.358 EAL: Detected lcore 115 as core 7 on socket 1 00:05:08.358 EAL: Detected lcore 116 as core 8 on socket 1 00:05:08.358 EAL: Detected lcore 117 as core 9 on socket 1 00:05:08.358 EAL: Detected lcore 118 as core 10 on socket 1 00:05:08.358 EAL: Detected lcore 119 as core 11 on socket 1 00:05:08.358 EAL: Detected lcore 120 as core 12 on socket 1 00:05:08.358 EAL: Detected lcore 121 as core 13 on socket 1 00:05:08.358 EAL: Detected lcore 122 as core 14 on socket 1 00:05:08.358 EAL: Detected lcore 123 as core 15 on socket 1 00:05:08.358 EAL: Detected lcore 124 as core 16 on socket 1 00:05:08.358 EAL: Detected lcore 125 as core 17 on socket 1 00:05:08.358 EAL: Detected lcore 126 as core 18 on socket 1 00:05:08.358 EAL: Detected lcore 127 as core 19 on socket 1 00:05:08.358 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:08.358 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:08.358 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:08.358 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:08.358 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:08.358 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:08.358 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:08.358 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:08.358 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:08.358 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:08.358 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:08.358 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:08.358 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:08.358 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:08.358 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:08.358 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:08.358 EAL: Maximum logical cores by configuration: 128 00:05:08.358 EAL: Detected CPU lcores: 128 00:05:08.358 EAL: Detected NUMA nodes: 2 00:05:08.358 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:08.358 EAL: Detected shared linkage of DPDK 00:05:08.358 EAL: No shared files mode enabled, IPC will be disabled 00:05:08.358 EAL: Bus pci wants IOVA as 'DC' 00:05:08.358 EAL: Buses did not request a specific IOVA mode. 00:05:08.358 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:08.358 EAL: Selected IOVA mode 'VA' 00:05:08.358 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.358 EAL: Probing VFIO support... 00:05:08.358 EAL: IOMMU type 1 (Type 1) is supported 00:05:08.358 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:08.358 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:08.358 EAL: VFIO support initialized 00:05:08.358 EAL: Ask a virtual area of 0x2e000 bytes 00:05:08.358 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:08.358 EAL: Setting up physically contiguous memory... 
00:05:08.358 EAL: Setting maximum number of open files to 524288 00:05:08.358 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:08.358 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:08.358 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:08.358 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.358 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:08.358 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.358 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.358 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:08.358 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:08.358 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.358 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:08.358 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.358 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.358 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:08.358 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:08.358 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.358 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:08.358 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.358 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.358 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:08.358 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:08.358 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.358 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:08.358 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.358 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.358 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:08.358 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:08.358 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:08.358 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.358 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:08.358 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.358 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.358 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:08.358 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:08.358 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.358 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:08.358 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.358 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.358 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:08.358 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:08.358 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.358 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:08.358 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.358 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.358 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:08.358 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:08.358 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.358 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:08.358 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.358 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.358 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:08.358 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:08.358 EAL: Hugepages will be freed exactly as allocated. 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: TSC frequency is ~2400000 KHz 00:05:08.359 EAL: Main lcore 0 is ready (tid=7f2aecd97a00;cpuset=[0]) 00:05:08.359 EAL: Trying to obtain current memory policy. 00:05:08.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.359 EAL: Restoring previous memory policy: 0 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was expanded by 2MB 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:08.359 EAL: Mem event callback 'spdk:(nil)' registered 00:05:08.359 00:05:08.359 00:05:08.359 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.359 http://cunit.sourceforge.net/ 00:05:08.359 00:05:08.359 00:05:08.359 Suite: components_suite 00:05:08.359 Test: vtophys_malloc_test ...passed 00:05:08.359 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:08.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.359 EAL: Restoring previous memory policy: 4 00:05:08.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was expanded by 4MB 00:05:08.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was shrunk by 4MB 00:05:08.359 EAL: Trying to obtain current memory policy. 00:05:08.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.359 EAL: Restoring previous memory policy: 4 00:05:08.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was expanded by 6MB 00:05:08.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was shrunk by 6MB 00:05:08.359 EAL: Trying to obtain current memory policy. 00:05:08.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.359 EAL: Restoring previous memory policy: 4 00:05:08.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was expanded by 10MB 00:05:08.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was shrunk by 10MB 00:05:08.359 EAL: Trying to obtain current memory policy. 
00:05:08.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.359 EAL: Restoring previous memory policy: 4 00:05:08.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was expanded by 18MB 00:05:08.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was shrunk by 18MB 00:05:08.359 EAL: Trying to obtain current memory policy. 00:05:08.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.359 EAL: Restoring previous memory policy: 4 00:05:08.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was expanded by 34MB 00:05:08.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was shrunk by 34MB 00:05:08.359 EAL: Trying to obtain current memory policy. 00:05:08.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.359 EAL: Restoring previous memory policy: 4 00:05:08.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was expanded by 66MB 00:05:08.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was shrunk by 66MB 00:05:08.359 EAL: Trying to obtain current memory policy. 00:05:08.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.359 EAL: Restoring previous memory policy: 4 00:05:08.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was expanded by 130MB 00:05:08.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was shrunk by 130MB 00:05:08.359 EAL: Trying to obtain current memory policy. 00:05:08.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.359 EAL: Restoring previous memory policy: 4 00:05:08.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was expanded by 258MB 00:05:08.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.359 EAL: request: mp_malloc_sync 00:05:08.359 EAL: No shared files mode enabled, IPC is disabled 00:05:08.359 EAL: Heap on socket 0 was shrunk by 258MB 00:05:08.359 EAL: Trying to obtain current memory policy. 
00:05:08.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.646 EAL: Restoring previous memory policy: 4 00:05:08.646 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.646 EAL: request: mp_malloc_sync 00:05:08.646 EAL: No shared files mode enabled, IPC is disabled 00:05:08.646 EAL: Heap on socket 0 was expanded by 514MB 00:05:08.646 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.646 EAL: request: mp_malloc_sync 00:05:08.646 EAL: No shared files mode enabled, IPC is disabled 00:05:08.646 EAL: Heap on socket 0 was shrunk by 514MB 00:05:08.646 EAL: Trying to obtain current memory policy. 00:05:08.646 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.910 EAL: Restoring previous memory policy: 4 00:05:08.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.910 EAL: request: mp_malloc_sync 00:05:08.910 EAL: No shared files mode enabled, IPC is disabled 00:05:08.910 EAL: Heap on socket 0 was expanded by 1026MB 00:05:08.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.910 EAL: request: mp_malloc_sync 00:05:08.910 EAL: No shared files mode enabled, IPC is disabled 00:05:08.910 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:08.910 passed 00:05:08.910 00:05:08.910 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.910 suites 1 1 n/a 0 0 00:05:08.910 tests 2 2 2 0 0 00:05:08.910 asserts 497 497 497 0 n/a 00:05:08.910 00:05:08.910 Elapsed time = 0.662 seconds 00:05:08.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.910 EAL: request: mp_malloc_sync 00:05:08.910 EAL: No shared files mode enabled, IPC is disabled 00:05:08.910 EAL: Heap on socket 0 was shrunk by 2MB 00:05:08.910 EAL: No shared files mode enabled, IPC is disabled 00:05:08.910 EAL: No shared files mode enabled, IPC is disabled 00:05:08.910 EAL: No shared files mode enabled, IPC is disabled 00:05:08.910 00:05:08.910 real 0m0.788s 00:05:08.910 user 0m0.411s 00:05:08.910 sys 0m0.348s 00:05:08.910 13:35:35 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.910 13:35:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:08.910 ************************************ 00:05:08.910 END TEST env_vtophys 00:05:08.910 ************************************ 00:05:08.910 13:35:35 env -- common/autotest_common.sh@1142 -- # return 0 00:05:08.910 13:35:35 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:08.910 13:35:35 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.910 13:35:35 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.910 13:35:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.170 ************************************ 00:05:09.170 START TEST env_pci 00:05:09.170 ************************************ 00:05:09.170 13:35:35 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:09.170 00:05:09.170 00:05:09.170 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.170 http://cunit.sourceforge.net/ 00:05:09.170 00:05:09.170 00:05:09.170 Suite: pci 00:05:09.170 Test: pci_hook ...[2024-07-15 13:35:35.483728] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 866626 has claimed it 00:05:09.170 EAL: Cannot find device (10000:00:01.0) 00:05:09.170 EAL: Failed to attach device on primary process 00:05:09.170 passed 00:05:09.170 
00:05:09.170 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.170 suites 1 1 n/a 0 0 00:05:09.170 tests 1 1 1 0 0 00:05:09.170 asserts 25 25 25 0 n/a 00:05:09.170 00:05:09.170 Elapsed time = 0.030 seconds 00:05:09.170 00:05:09.170 real 0m0.050s 00:05:09.170 user 0m0.017s 00:05:09.170 sys 0m0.033s 00:05:09.170 13:35:35 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.170 13:35:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:09.170 ************************************ 00:05:09.170 END TEST env_pci 00:05:09.170 ************************************ 00:05:09.170 13:35:35 env -- common/autotest_common.sh@1142 -- # return 0 00:05:09.170 13:35:35 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:09.170 13:35:35 env -- env/env.sh@15 -- # uname 00:05:09.170 13:35:35 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:09.170 13:35:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:09.170 13:35:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.170 13:35:35 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:09.170 13:35:35 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.170 13:35:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.170 ************************************ 00:05:09.170 START TEST env_dpdk_post_init 00:05:09.170 ************************************ 00:05:09.170 13:35:35 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.170 EAL: Detected CPU lcores: 128 00:05:09.170 EAL: Detected NUMA nodes: 2 00:05:09.170 EAL: Detected shared linkage of DPDK 00:05:09.170 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:09.170 EAL: Selected IOVA mode 'VA' 00:05:09.170 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.170 EAL: VFIO support initialized 00:05:09.170 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:09.430 EAL: Using IOMMU type 1 (Type 1) 00:05:09.430 EAL: Ignore mapping IO port bar(1) 00:05:09.430 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:09.691 EAL: Ignore mapping IO port bar(1) 00:05:09.691 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:09.951 EAL: Ignore mapping IO port bar(1) 00:05:09.951 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:10.210 EAL: Ignore mapping IO port bar(1) 00:05:10.210 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:10.210 EAL: Ignore mapping IO port bar(1) 00:05:10.471 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:10.471 EAL: Ignore mapping IO port bar(1) 00:05:10.731 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:10.731 EAL: Ignore mapping IO port bar(1) 00:05:10.991 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:10.991 EAL: Ignore mapping IO port bar(1) 00:05:10.991 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:11.252 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:11.512 EAL: Ignore mapping IO port bar(1) 00:05:11.512 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
00:05:11.772 EAL: Ignore mapping IO port bar(1) 00:05:11.772 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:11.772 EAL: Ignore mapping IO port bar(1) 00:05:12.032 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:12.032 EAL: Ignore mapping IO port bar(1) 00:05:12.292 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:12.292 EAL: Ignore mapping IO port bar(1) 00:05:12.551 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:12.551 EAL: Ignore mapping IO port bar(1) 00:05:12.551 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:12.811 EAL: Ignore mapping IO port bar(1) 00:05:12.811 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:13.070 EAL: Ignore mapping IO port bar(1) 00:05:13.070 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:13.070 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:13.070 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:13.330 Starting DPDK initialization... 00:05:13.330 Starting SPDK post initialization... 00:05:13.330 SPDK NVMe probe 00:05:13.330 Attaching to 0000:65:00.0 00:05:13.330 Attached to 0000:65:00.0 00:05:13.330 Cleaning up... 00:05:15.237 00:05:15.237 real 0m5.715s 00:05:15.238 user 0m0.183s 00:05:15.238 sys 0m0.071s 00:05:15.238 13:35:41 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.238 13:35:41 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.238 ************************************ 00:05:15.238 END TEST env_dpdk_post_init 00:05:15.238 ************************************ 00:05:15.238 13:35:41 env -- common/autotest_common.sh@1142 -- # return 0 00:05:15.238 13:35:41 env -- env/env.sh@26 -- # uname 00:05:15.238 13:35:41 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:15.238 13:35:41 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.238 13:35:41 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.238 13:35:41 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.238 13:35:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.238 ************************************ 00:05:15.238 START TEST env_mem_callbacks 00:05:15.238 ************************************ 00:05:15.238 13:35:41 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.238 EAL: Detected CPU lcores: 128 00:05:15.238 EAL: Detected NUMA nodes: 2 00:05:15.238 EAL: Detected shared linkage of DPDK 00:05:15.238 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:15.238 EAL: Selected IOVA mode 'VA' 00:05:15.238 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.238 EAL: VFIO support initialized 00:05:15.238 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:15.238 00:05:15.238 00:05:15.238 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.238 http://cunit.sourceforge.net/ 00:05:15.238 00:05:15.238 00:05:15.238 Suite: memory 00:05:15.238 Test: test ... 
00:05:15.238 register 0x200000200000 2097152 00:05:15.238 malloc 3145728 00:05:15.238 register 0x200000400000 4194304 00:05:15.238 buf 0x200000500000 len 3145728 PASSED 00:05:15.238 malloc 64 00:05:15.238 buf 0x2000004fff40 len 64 PASSED 00:05:15.238 malloc 4194304 00:05:15.238 register 0x200000800000 6291456 00:05:15.238 buf 0x200000a00000 len 4194304 PASSED 00:05:15.238 free 0x200000500000 3145728 00:05:15.238 free 0x2000004fff40 64 00:05:15.238 unregister 0x200000400000 4194304 PASSED 00:05:15.238 free 0x200000a00000 4194304 00:05:15.238 unregister 0x200000800000 6291456 PASSED 00:05:15.238 malloc 8388608 00:05:15.238 register 0x200000400000 10485760 00:05:15.238 buf 0x200000600000 len 8388608 PASSED 00:05:15.238 free 0x200000600000 8388608 00:05:15.238 unregister 0x200000400000 10485760 PASSED 00:05:15.238 passed 00:05:15.238 00:05:15.238 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.238 suites 1 1 n/a 0 0 00:05:15.238 tests 1 1 1 0 0 00:05:15.238 asserts 15 15 15 0 n/a 00:05:15.238 00:05:15.238 Elapsed time = 0.008 seconds 00:05:15.238 00:05:15.238 real 0m0.066s 00:05:15.238 user 0m0.021s 00:05:15.238 sys 0m0.045s 00:05:15.238 13:35:41 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.238 13:35:41 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:15.238 ************************************ 00:05:15.238 END TEST env_mem_callbacks 00:05:15.238 ************************************ 00:05:15.238 13:35:41 env -- common/autotest_common.sh@1142 -- # return 0 00:05:15.238 00:05:15.238 real 0m7.343s 00:05:15.238 user 0m1.029s 00:05:15.238 sys 0m0.850s 00:05:15.238 13:35:41 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.238 13:35:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.238 ************************************ 00:05:15.238 END TEST env 00:05:15.238 ************************************ 00:05:15.238 13:35:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:15.238 13:35:41 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:15.238 13:35:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.238 13:35:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.238 13:35:41 -- common/autotest_common.sh@10 -- # set +x 00:05:15.238 ************************************ 00:05:15.238 START TEST rpc 00:05:15.238 ************************************ 00:05:15.238 13:35:41 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:15.238 * Looking for test storage... 00:05:15.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:15.238 13:35:41 rpc -- rpc/rpc.sh@65 -- # spdk_pid=868072 00:05:15.238 13:35:41 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.238 13:35:41 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:15.238 13:35:41 rpc -- rpc/rpc.sh@67 -- # waitforlisten 868072 00:05:15.238 13:35:41 rpc -- common/autotest_common.sh@829 -- # '[' -z 868072 ']' 00:05:15.238 13:35:41 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.238 13:35:41 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.238 13:35:41 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:15.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.238 13:35:41 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.238 13:35:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.238 [2024-07-15 13:35:41.743215] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:15.238 [2024-07-15 13:35:41.743281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868072 ] 00:05:15.498 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.498 [2024-07-15 13:35:41.803329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.498 [2024-07-15 13:35:41.872187] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:15.498 [2024-07-15 13:35:41.872221] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 868072' to capture a snapshot of events at runtime. 00:05:15.498 [2024-07-15 13:35:41.872228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:15.498 [2024-07-15 13:35:41.872235] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:15.498 [2024-07-15 13:35:41.872241] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid868072 for offline analysis/debug. 00:05:15.498 [2024-07-15 13:35:41.872259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.068 13:35:42 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.068 13:35:42 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:16.068 13:35:42 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.068 13:35:42 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.068 13:35:42 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:16.068 13:35:42 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:16.068 13:35:42 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.068 13:35:42 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.068 13:35:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.068 ************************************ 00:05:16.068 START TEST rpc_integrity 00:05:16.068 ************************************ 00:05:16.068 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:16.068 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:16.068 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.068 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.068 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.068 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:16.068 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:16.068 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:16.068 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:16.068 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.068 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.329 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.329 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:16.329 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:16.329 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.329 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.329 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.329 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:16.329 { 00:05:16.329 "name": "Malloc0", 00:05:16.329 "aliases": [ 00:05:16.329 "b79cf8a7-c75b-49e1-a1ee-54e39324fc7e" 00:05:16.329 ], 00:05:16.329 "product_name": "Malloc disk", 00:05:16.329 "block_size": 512, 00:05:16.329 "num_blocks": 16384, 00:05:16.329 "uuid": "b79cf8a7-c75b-49e1-a1ee-54e39324fc7e", 00:05:16.329 "assigned_rate_limits": { 00:05:16.329 "rw_ios_per_sec": 0, 00:05:16.329 "rw_mbytes_per_sec": 0, 00:05:16.329 "r_mbytes_per_sec": 0, 00:05:16.329 "w_mbytes_per_sec": 0 00:05:16.329 }, 00:05:16.329 "claimed": false, 00:05:16.329 "zoned": false, 00:05:16.329 "supported_io_types": { 00:05:16.329 "read": true, 00:05:16.329 "write": true, 00:05:16.329 "unmap": true, 00:05:16.329 "flush": true, 00:05:16.329 "reset": true, 00:05:16.329 "nvme_admin": false, 00:05:16.329 "nvme_io": false, 00:05:16.329 "nvme_io_md": false, 00:05:16.329 "write_zeroes": true, 00:05:16.329 "zcopy": true, 00:05:16.329 "get_zone_info": false, 00:05:16.329 "zone_management": false, 00:05:16.329 "zone_append": false, 00:05:16.329 "compare": false, 00:05:16.329 "compare_and_write": false, 00:05:16.330 "abort": true, 00:05:16.330 "seek_hole": false, 00:05:16.330 "seek_data": false, 00:05:16.330 "copy": true, 00:05:16.330 "nvme_iov_md": false 00:05:16.330 }, 00:05:16.330 "memory_domains": [ 00:05:16.330 { 00:05:16.330 "dma_device_id": "system", 00:05:16.330 "dma_device_type": 1 00:05:16.330 }, 00:05:16.330 { 00:05:16.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.330 "dma_device_type": 2 00:05:16.330 } 00:05:16.330 ], 00:05:16.330 "driver_specific": {} 00:05:16.330 } 00:05:16.330 ]' 00:05:16.330 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:16.330 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:16.330 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.330 [2024-07-15 13:35:42.666109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:16.330 [2024-07-15 13:35:42.666146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:16.330 [2024-07-15 13:35:42.666159] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcefd80 00:05:16.330 [2024-07-15 13:35:42.666166] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:16.330 
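For reference, the rpc_integrity sequence being traced here can be reproduced by hand against the same running spdk_tgt with the project's rpc.py client. A minimal sketch, assuming the default RPC socket /var/tmp/spdk.sock and a shell started in the SPDK source tree:

    # create an 8 MiB malloc bdev with 512-byte blocks; rpc.py prints its name (Malloc0 here)
    malloc=$(scripts/rpc.py bdev_malloc_create 8 512)
    # layer a passthru bdev on top of it, as the test does
    scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
    # both bdevs should now be listed
    scripts/rpc.py bdev_get_bdevs | jq length      # expect 2
    # tear down in reverse order and confirm the list is empty again
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete "$malloc"
    scripts/rpc.py bdev_get_bdevs | jq length      # expect 0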
[2024-07-15 13:35:42.667483] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:16.330 [2024-07-15 13:35:42.667504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:16.330 Passthru0 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.330 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.330 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:16.330 { 00:05:16.330 "name": "Malloc0", 00:05:16.330 "aliases": [ 00:05:16.330 "b79cf8a7-c75b-49e1-a1ee-54e39324fc7e" 00:05:16.330 ], 00:05:16.330 "product_name": "Malloc disk", 00:05:16.330 "block_size": 512, 00:05:16.330 "num_blocks": 16384, 00:05:16.330 "uuid": "b79cf8a7-c75b-49e1-a1ee-54e39324fc7e", 00:05:16.330 "assigned_rate_limits": { 00:05:16.330 "rw_ios_per_sec": 0, 00:05:16.330 "rw_mbytes_per_sec": 0, 00:05:16.330 "r_mbytes_per_sec": 0, 00:05:16.330 "w_mbytes_per_sec": 0 00:05:16.330 }, 00:05:16.330 "claimed": true, 00:05:16.330 "claim_type": "exclusive_write", 00:05:16.330 "zoned": false, 00:05:16.330 "supported_io_types": { 00:05:16.330 "read": true, 00:05:16.330 "write": true, 00:05:16.330 "unmap": true, 00:05:16.330 "flush": true, 00:05:16.330 "reset": true, 00:05:16.330 "nvme_admin": false, 00:05:16.330 "nvme_io": false, 00:05:16.330 "nvme_io_md": false, 00:05:16.330 "write_zeroes": true, 00:05:16.330 "zcopy": true, 00:05:16.330 "get_zone_info": false, 00:05:16.330 "zone_management": false, 00:05:16.330 "zone_append": false, 00:05:16.330 "compare": false, 00:05:16.330 "compare_and_write": false, 00:05:16.330 "abort": true, 00:05:16.330 "seek_hole": false, 00:05:16.330 "seek_data": false, 00:05:16.330 "copy": true, 00:05:16.330 "nvme_iov_md": false 00:05:16.330 }, 00:05:16.330 "memory_domains": [ 00:05:16.330 { 00:05:16.330 "dma_device_id": "system", 00:05:16.330 "dma_device_type": 1 00:05:16.330 }, 00:05:16.330 { 00:05:16.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.330 "dma_device_type": 2 00:05:16.330 } 00:05:16.330 ], 00:05:16.330 "driver_specific": {} 00:05:16.330 }, 00:05:16.330 { 00:05:16.330 "name": "Passthru0", 00:05:16.330 "aliases": [ 00:05:16.330 "0fe623ee-5d2f-5ca7-a4df-066d43b9695b" 00:05:16.330 ], 00:05:16.330 "product_name": "passthru", 00:05:16.330 "block_size": 512, 00:05:16.330 "num_blocks": 16384, 00:05:16.330 "uuid": "0fe623ee-5d2f-5ca7-a4df-066d43b9695b", 00:05:16.330 "assigned_rate_limits": { 00:05:16.330 "rw_ios_per_sec": 0, 00:05:16.330 "rw_mbytes_per_sec": 0, 00:05:16.330 "r_mbytes_per_sec": 0, 00:05:16.330 "w_mbytes_per_sec": 0 00:05:16.330 }, 00:05:16.330 "claimed": false, 00:05:16.330 "zoned": false, 00:05:16.330 "supported_io_types": { 00:05:16.330 "read": true, 00:05:16.330 "write": true, 00:05:16.330 "unmap": true, 00:05:16.330 "flush": true, 00:05:16.330 "reset": true, 00:05:16.330 "nvme_admin": false, 00:05:16.330 "nvme_io": false, 00:05:16.330 "nvme_io_md": false, 00:05:16.330 "write_zeroes": true, 00:05:16.330 "zcopy": true, 00:05:16.330 "get_zone_info": false, 00:05:16.330 "zone_management": false, 00:05:16.330 "zone_append": false, 00:05:16.330 "compare": false, 00:05:16.330 "compare_and_write": false, 00:05:16.330 "abort": true, 00:05:16.330 "seek_hole": false, 
00:05:16.330 "seek_data": false, 00:05:16.330 "copy": true, 00:05:16.330 "nvme_iov_md": false 00:05:16.330 }, 00:05:16.330 "memory_domains": [ 00:05:16.330 { 00:05:16.330 "dma_device_id": "system", 00:05:16.330 "dma_device_type": 1 00:05:16.330 }, 00:05:16.330 { 00:05:16.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.330 "dma_device_type": 2 00:05:16.330 } 00:05:16.330 ], 00:05:16.330 "driver_specific": { 00:05:16.330 "passthru": { 00:05:16.330 "name": "Passthru0", 00:05:16.330 "base_bdev_name": "Malloc0" 00:05:16.330 } 00:05:16.330 } 00:05:16.330 } 00:05:16.330 ]' 00:05:16.330 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:16.330 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:16.330 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.330 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.330 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.330 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:16.330 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:16.330 13:35:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:16.330 00:05:16.330 real 0m0.299s 00:05:16.330 user 0m0.193s 00:05:16.330 sys 0m0.037s 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.330 13:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.330 ************************************ 00:05:16.330 END TEST rpc_integrity 00:05:16.330 ************************************ 00:05:16.591 13:35:42 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.591 13:35:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:16.591 13:35:42 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.591 13:35:42 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.591 13:35:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.591 ************************************ 00:05:16.591 START TEST rpc_plugins 00:05:16.591 ************************************ 00:05:16.591 13:35:42 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:16.591 13:35:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:16.591 13:35:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.591 13:35:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.591 13:35:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.591 13:35:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:16.591 13:35:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:16.591 13:35:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.591 13:35:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.591 13:35:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.591 13:35:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:16.591 { 00:05:16.591 "name": "Malloc1", 00:05:16.591 "aliases": [ 00:05:16.591 "a5f47db2-677b-4f13-bcc2-9613218d5e0e" 00:05:16.591 ], 00:05:16.591 "product_name": "Malloc disk", 00:05:16.591 "block_size": 4096, 00:05:16.591 "num_blocks": 256, 00:05:16.591 "uuid": "a5f47db2-677b-4f13-bcc2-9613218d5e0e", 00:05:16.591 "assigned_rate_limits": { 00:05:16.591 "rw_ios_per_sec": 0, 00:05:16.591 "rw_mbytes_per_sec": 0, 00:05:16.591 "r_mbytes_per_sec": 0, 00:05:16.591 "w_mbytes_per_sec": 0 00:05:16.591 }, 00:05:16.591 "claimed": false, 00:05:16.591 "zoned": false, 00:05:16.591 "supported_io_types": { 00:05:16.591 "read": true, 00:05:16.592 "write": true, 00:05:16.592 "unmap": true, 00:05:16.592 "flush": true, 00:05:16.592 "reset": true, 00:05:16.592 "nvme_admin": false, 00:05:16.592 "nvme_io": false, 00:05:16.592 "nvme_io_md": false, 00:05:16.592 "write_zeroes": true, 00:05:16.592 "zcopy": true, 00:05:16.592 "get_zone_info": false, 00:05:16.592 "zone_management": false, 00:05:16.592 "zone_append": false, 00:05:16.592 "compare": false, 00:05:16.592 "compare_and_write": false, 00:05:16.592 "abort": true, 00:05:16.592 "seek_hole": false, 00:05:16.592 "seek_data": false, 00:05:16.592 "copy": true, 00:05:16.592 "nvme_iov_md": false 00:05:16.592 }, 00:05:16.592 "memory_domains": [ 00:05:16.592 { 00:05:16.592 "dma_device_id": "system", 00:05:16.592 "dma_device_type": 1 00:05:16.592 }, 00:05:16.592 { 00:05:16.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.592 "dma_device_type": 2 00:05:16.592 } 00:05:16.592 ], 00:05:16.592 "driver_specific": {} 00:05:16.592 } 00:05:16.592 ]' 00:05:16.592 13:35:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:16.592 13:35:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:16.592 13:35:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:16.592 13:35:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.592 13:35:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.592 13:35:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.592 13:35:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:16.592 13:35:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.592 13:35:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.592 13:35:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.592 13:35:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:16.592 13:35:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:16.592 13:35:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:16.592 00:05:16.592 real 0m0.148s 00:05:16.592 user 0m0.093s 00:05:16.592 sys 0m0.022s 00:05:16.592 13:35:43 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.592 13:35:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.592 ************************************ 00:05:16.592 END TEST rpc_plugins 00:05:16.592 ************************************ 00:05:16.592 13:35:43 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.592 13:35:43 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:16.592 13:35:43 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.592 13:35:43 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.592 13:35:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.856 ************************************ 00:05:16.856 START TEST rpc_trace_cmd_test 00:05:16.856 ************************************ 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:16.856 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid868072", 00:05:16.856 "tpoint_group_mask": "0x8", 00:05:16.856 "iscsi_conn": { 00:05:16.856 "mask": "0x2", 00:05:16.856 "tpoint_mask": "0x0" 00:05:16.856 }, 00:05:16.856 "scsi": { 00:05:16.856 "mask": "0x4", 00:05:16.856 "tpoint_mask": "0x0" 00:05:16.856 }, 00:05:16.856 "bdev": { 00:05:16.856 "mask": "0x8", 00:05:16.856 "tpoint_mask": "0xffffffffffffffff" 00:05:16.856 }, 00:05:16.856 "nvmf_rdma": { 00:05:16.856 "mask": "0x10", 00:05:16.856 "tpoint_mask": "0x0" 00:05:16.856 }, 00:05:16.856 "nvmf_tcp": { 00:05:16.856 "mask": "0x20", 00:05:16.856 "tpoint_mask": "0x0" 00:05:16.856 }, 00:05:16.856 "ftl": { 00:05:16.856 "mask": "0x40", 00:05:16.856 "tpoint_mask": "0x0" 00:05:16.856 }, 00:05:16.856 "blobfs": { 00:05:16.856 "mask": "0x80", 00:05:16.856 "tpoint_mask": "0x0" 00:05:16.856 }, 00:05:16.856 "dsa": { 00:05:16.856 "mask": "0x200", 00:05:16.856 "tpoint_mask": "0x0" 00:05:16.856 }, 00:05:16.856 "thread": { 00:05:16.856 "mask": "0x400", 00:05:16.856 "tpoint_mask": "0x0" 00:05:16.856 }, 00:05:16.856 "nvme_pcie": { 00:05:16.856 "mask": "0x800", 00:05:16.856 "tpoint_mask": "0x0" 00:05:16.856 }, 00:05:16.856 "iaa": { 00:05:16.856 "mask": "0x1000", 00:05:16.856 "tpoint_mask": "0x0" 00:05:16.856 }, 00:05:16.856 "nvme_tcp": { 00:05:16.856 "mask": "0x2000", 00:05:16.856 "tpoint_mask": "0x0" 00:05:16.856 }, 00:05:16.856 "bdev_nvme": { 00:05:16.856 "mask": "0x4000", 00:05:16.856 "tpoint_mask": "0x0" 00:05:16.856 }, 00:05:16.856 "sock": { 00:05:16.856 "mask": "0x8000", 00:05:16.856 "tpoint_mask": "0x0" 00:05:16.856 } 00:05:16.856 }' 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
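The jq pipeline traced above just validates fields of the trace_get_info reply. Stated as a small stand-alone check, assuming a target launched with '-e bdev' as in this run:

    # '-e bdev' enables tracepoint group 0x8, so the bdev group mask must be fully set
    info=$(scripts/rpc.py trace_get_info)
    echo "$info" | jq -r .tpoint_group_mask    # expect 0x8
    echo "$info" | jq -r .bdev.tpoint_mask     # expect a non-zero mask (0xffffffffffffffff above)
    echo "$info" | jq -r .tpoint_shm_path      # /dev/shm/spdk_tgt_trace.pid<pid>, consumable by spdk_trace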
00:05:16.856 00:05:16.856 real 0m0.228s 00:05:16.856 user 0m0.188s 00:05:16.856 sys 0m0.031s 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.856 13:35:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:16.856 ************************************ 00:05:16.856 END TEST rpc_trace_cmd_test 00:05:16.856 ************************************ 00:05:17.158 13:35:43 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.158 13:35:43 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:17.158 13:35:43 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:17.158 13:35:43 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:17.158 13:35:43 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.158 13:35:43 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.158 13:35:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.158 ************************************ 00:05:17.158 START TEST rpc_daemon_integrity 00:05:17.158 ************************************ 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:17.158 { 00:05:17.158 "name": "Malloc2", 00:05:17.158 "aliases": [ 00:05:17.158 "80198900-96cd-4ab5-b3f6-3a4452cda4bf" 00:05:17.158 ], 00:05:17.158 "product_name": "Malloc disk", 00:05:17.158 "block_size": 512, 00:05:17.158 "num_blocks": 16384, 00:05:17.158 "uuid": "80198900-96cd-4ab5-b3f6-3a4452cda4bf", 00:05:17.158 "assigned_rate_limits": { 00:05:17.158 "rw_ios_per_sec": 0, 00:05:17.158 "rw_mbytes_per_sec": 0, 00:05:17.158 "r_mbytes_per_sec": 0, 00:05:17.158 "w_mbytes_per_sec": 0 00:05:17.158 }, 00:05:17.158 "claimed": false, 00:05:17.158 "zoned": false, 00:05:17.158 "supported_io_types": { 00:05:17.158 "read": true, 00:05:17.158 "write": true, 00:05:17.158 "unmap": true, 00:05:17.158 "flush": true, 00:05:17.158 "reset": true, 00:05:17.158 "nvme_admin": false, 00:05:17.158 "nvme_io": false, 
00:05:17.158 "nvme_io_md": false, 00:05:17.158 "write_zeroes": true, 00:05:17.158 "zcopy": true, 00:05:17.158 "get_zone_info": false, 00:05:17.158 "zone_management": false, 00:05:17.158 "zone_append": false, 00:05:17.158 "compare": false, 00:05:17.158 "compare_and_write": false, 00:05:17.158 "abort": true, 00:05:17.158 "seek_hole": false, 00:05:17.158 "seek_data": false, 00:05:17.158 "copy": true, 00:05:17.158 "nvme_iov_md": false 00:05:17.158 }, 00:05:17.158 "memory_domains": [ 00:05:17.158 { 00:05:17.158 "dma_device_id": "system", 00:05:17.158 "dma_device_type": 1 00:05:17.158 }, 00:05:17.158 { 00:05:17.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.158 "dma_device_type": 2 00:05:17.158 } 00:05:17.158 ], 00:05:17.158 "driver_specific": {} 00:05:17.158 } 00:05:17.158 ]' 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.158 [2024-07-15 13:35:43.564548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:17.158 [2024-07-15 13:35:43.564578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.158 [2024-07-15 13:35:43.564591] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcf0a90 00:05:17.158 [2024-07-15 13:35:43.564598] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.158 [2024-07-15 13:35:43.565811] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.158 [2024-07-15 13:35:43.565831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.158 Passthru0 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:17.158 { 00:05:17.158 "name": "Malloc2", 00:05:17.158 "aliases": [ 00:05:17.158 "80198900-96cd-4ab5-b3f6-3a4452cda4bf" 00:05:17.158 ], 00:05:17.158 "product_name": "Malloc disk", 00:05:17.158 "block_size": 512, 00:05:17.158 "num_blocks": 16384, 00:05:17.158 "uuid": "80198900-96cd-4ab5-b3f6-3a4452cda4bf", 00:05:17.158 "assigned_rate_limits": { 00:05:17.158 "rw_ios_per_sec": 0, 00:05:17.158 "rw_mbytes_per_sec": 0, 00:05:17.158 "r_mbytes_per_sec": 0, 00:05:17.158 "w_mbytes_per_sec": 0 00:05:17.158 }, 00:05:17.158 "claimed": true, 00:05:17.158 "claim_type": "exclusive_write", 00:05:17.158 "zoned": false, 00:05:17.158 "supported_io_types": { 00:05:17.158 "read": true, 00:05:17.158 "write": true, 00:05:17.158 "unmap": true, 00:05:17.158 "flush": true, 00:05:17.158 "reset": true, 00:05:17.158 "nvme_admin": false, 00:05:17.158 "nvme_io": false, 00:05:17.158 "nvme_io_md": false, 00:05:17.158 "write_zeroes": true, 00:05:17.158 "zcopy": true, 00:05:17.158 "get_zone_info": 
false, 00:05:17.158 "zone_management": false, 00:05:17.158 "zone_append": false, 00:05:17.158 "compare": false, 00:05:17.158 "compare_and_write": false, 00:05:17.158 "abort": true, 00:05:17.158 "seek_hole": false, 00:05:17.158 "seek_data": false, 00:05:17.158 "copy": true, 00:05:17.158 "nvme_iov_md": false 00:05:17.158 }, 00:05:17.158 "memory_domains": [ 00:05:17.158 { 00:05:17.158 "dma_device_id": "system", 00:05:17.158 "dma_device_type": 1 00:05:17.158 }, 00:05:17.158 { 00:05:17.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.158 "dma_device_type": 2 00:05:17.158 } 00:05:17.158 ], 00:05:17.158 "driver_specific": {} 00:05:17.158 }, 00:05:17.158 { 00:05:17.158 "name": "Passthru0", 00:05:17.158 "aliases": [ 00:05:17.158 "bc9d8267-406e-5c9c-ab41-68c7d8e2cf3c" 00:05:17.158 ], 00:05:17.158 "product_name": "passthru", 00:05:17.158 "block_size": 512, 00:05:17.158 "num_blocks": 16384, 00:05:17.158 "uuid": "bc9d8267-406e-5c9c-ab41-68c7d8e2cf3c", 00:05:17.158 "assigned_rate_limits": { 00:05:17.158 "rw_ios_per_sec": 0, 00:05:17.158 "rw_mbytes_per_sec": 0, 00:05:17.158 "r_mbytes_per_sec": 0, 00:05:17.158 "w_mbytes_per_sec": 0 00:05:17.158 }, 00:05:17.158 "claimed": false, 00:05:17.158 "zoned": false, 00:05:17.158 "supported_io_types": { 00:05:17.158 "read": true, 00:05:17.158 "write": true, 00:05:17.158 "unmap": true, 00:05:17.158 "flush": true, 00:05:17.158 "reset": true, 00:05:17.158 "nvme_admin": false, 00:05:17.158 "nvme_io": false, 00:05:17.158 "nvme_io_md": false, 00:05:17.158 "write_zeroes": true, 00:05:17.158 "zcopy": true, 00:05:17.158 "get_zone_info": false, 00:05:17.158 "zone_management": false, 00:05:17.158 "zone_append": false, 00:05:17.158 "compare": false, 00:05:17.158 "compare_and_write": false, 00:05:17.158 "abort": true, 00:05:17.158 "seek_hole": false, 00:05:17.158 "seek_data": false, 00:05:17.158 "copy": true, 00:05:17.158 "nvme_iov_md": false 00:05:17.158 }, 00:05:17.158 "memory_domains": [ 00:05:17.158 { 00:05:17.158 "dma_device_id": "system", 00:05:17.158 "dma_device_type": 1 00:05:17.158 }, 00:05:17.158 { 00:05:17.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.158 "dma_device_type": 2 00:05:17.158 } 00:05:17.158 ], 00:05:17.158 "driver_specific": { 00:05:17.158 "passthru": { 00:05:17.158 "name": "Passthru0", 00:05:17.158 "base_bdev_name": "Malloc2" 00:05:17.158 } 00:05:17.158 } 00:05:17.158 } 00:05:17.158 ]' 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.158 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.422 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.422 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:17.422 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.422 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.422 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.422 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.422 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.422 13:35:43 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.422 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.422 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.422 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:17.422 13:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.422 00:05:17.422 real 0m0.295s 00:05:17.422 user 0m0.184s 00:05:17.422 sys 0m0.046s 00:05:17.422 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.422 13:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.422 ************************************ 00:05:17.422 END TEST rpc_daemon_integrity 00:05:17.422 ************************************ 00:05:17.422 13:35:43 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.422 13:35:43 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:17.422 13:35:43 rpc -- rpc/rpc.sh@84 -- # killprocess 868072 00:05:17.422 13:35:43 rpc -- common/autotest_common.sh@948 -- # '[' -z 868072 ']' 00:05:17.422 13:35:43 rpc -- common/autotest_common.sh@952 -- # kill -0 868072 00:05:17.422 13:35:43 rpc -- common/autotest_common.sh@953 -- # uname 00:05:17.422 13:35:43 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.422 13:35:43 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 868072 00:05:17.422 13:35:43 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.422 13:35:43 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.422 13:35:43 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 868072' 00:05:17.422 killing process with pid 868072 00:05:17.422 13:35:43 rpc -- common/autotest_common.sh@967 -- # kill 868072 00:05:17.422 13:35:43 rpc -- common/autotest_common.sh@972 -- # wait 868072 00:05:17.682 00:05:17.682 real 0m2.430s 00:05:17.682 user 0m3.205s 00:05:17.682 sys 0m0.667s 00:05:17.682 13:35:44 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.682 13:35:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.682 ************************************ 00:05:17.682 END TEST rpc 00:05:17.682 ************************************ 00:05:17.682 13:35:44 -- common/autotest_common.sh@1142 -- # return 0 00:05:17.682 13:35:44 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:17.682 13:35:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.682 13:35:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.682 13:35:44 -- common/autotest_common.sh@10 -- # set +x 00:05:17.682 ************************************ 00:05:17.682 START TEST skip_rpc 00:05:17.682 ************************************ 00:05:17.683 13:35:44 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:17.683 * Looking for test storage... 
00:05:17.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:17.683 13:35:44 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:17.683 13:35:44 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:17.683 13:35:44 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:17.683 13:35:44 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.683 13:35:44 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.683 13:35:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.943 ************************************ 00:05:17.943 START TEST skip_rpc 00:05:17.943 ************************************ 00:05:17.943 13:35:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:17.943 13:35:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=868601 00:05:17.943 13:35:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.943 13:35:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:17.943 13:35:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:17.943 [2024-07-15 13:35:44.285230] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:17.943 [2024-07-15 13:35:44.285296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868601 ] 00:05:17.943 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.943 [2024-07-15 13:35:44.350949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.943 [2024-07-15 13:35:44.424067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 868601 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 868601 ']' 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 868601 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 868601 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 868601' 00:05:23.230 killing process with pid 868601 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 868601 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 868601 00:05:23.230 00:05:23.230 real 0m5.279s 00:05:23.230 user 0m5.083s 00:05:23.230 sys 0m0.231s 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.230 13:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.230 ************************************ 00:05:23.230 END TEST skip_rpc 00:05:23.230 ************************************ 00:05:23.230 13:35:49 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:23.230 13:35:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:23.230 13:35:49 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.230 13:35:49 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.230 13:35:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.230 ************************************ 00:05:23.230 START TEST skip_rpc_with_json 00:05:23.230 ************************************ 00:05:23.230 13:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:23.230 13:35:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:23.230 13:35:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=869783 00:05:23.230 13:35:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.230 13:35:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 869783 00:05:23.230 13:35:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.230 13:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 869783 ']' 00:05:23.230 13:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.230 13:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.230 13:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
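The skip_rpc case that finishes above reduces to starting the target without an RPC server and confirming that any client call fails. A condensed sketch of the same check, with paths relative to the SPDK tree as used in this workspace:

    # start the target with the RPC server disabled; no socket will ever be created
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                    # the test script also just sleeps here
    # nothing listens on /var/tmp/spdk.sock, so this must fail
    if scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC succeeded without an RPC server" >&2
        exit 1
    fi
    kill "$tgt_pid"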
00:05:23.230 13:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.230 13:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.230 [2024-07-15 13:35:49.633293] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:23.230 [2024-07-15 13:35:49.633347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869783 ] 00:05:23.230 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.230 [2024-07-15 13:35:49.695190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.490 [2024-07-15 13:35:49.764986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.060 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.060 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:24.060 13:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:24.060 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.060 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.060 [2024-07-15 13:35:50.408690] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:24.060 request: 00:05:24.060 { 00:05:24.060 "trtype": "tcp", 00:05:24.060 "method": "nvmf_get_transports", 00:05:24.060 "req_id": 1 00:05:24.060 } 00:05:24.060 Got JSON-RPC error response 00:05:24.060 response: 00:05:24.060 { 00:05:24.060 "code": -19, 00:05:24.060 "message": "No such device" 00:05:24.060 } 00:05:24.060 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:24.060 13:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:24.060 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.060 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.060 [2024-07-15 13:35:50.420819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.060 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.060 13:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:24.060 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.060 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.060 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.060 13:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:24.320 { 00:05:24.320 "subsystems": [ 00:05:24.320 { 00:05:24.320 "subsystem": "vfio_user_target", 00:05:24.320 "config": null 00:05:24.320 }, 00:05:24.320 { 00:05:24.320 "subsystem": "keyring", 00:05:24.320 "config": [] 00:05:24.320 }, 00:05:24.320 { 00:05:24.320 "subsystem": "iobuf", 00:05:24.320 "config": [ 00:05:24.320 { 00:05:24.320 "method": "iobuf_set_options", 00:05:24.320 "params": { 00:05:24.320 "small_pool_count": 8192, 00:05:24.320 "large_pool_count": 1024, 00:05:24.320 "small_bufsize": 8192, 00:05:24.320 "large_bufsize": 
135168 00:05:24.320 } 00:05:24.320 } 00:05:24.320 ] 00:05:24.320 }, 00:05:24.320 { 00:05:24.320 "subsystem": "sock", 00:05:24.320 "config": [ 00:05:24.320 { 00:05:24.321 "method": "sock_set_default_impl", 00:05:24.321 "params": { 00:05:24.321 "impl_name": "posix" 00:05:24.321 } 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "method": "sock_impl_set_options", 00:05:24.321 "params": { 00:05:24.321 "impl_name": "ssl", 00:05:24.321 "recv_buf_size": 4096, 00:05:24.321 "send_buf_size": 4096, 00:05:24.321 "enable_recv_pipe": true, 00:05:24.321 "enable_quickack": false, 00:05:24.321 "enable_placement_id": 0, 00:05:24.321 "enable_zerocopy_send_server": true, 00:05:24.321 "enable_zerocopy_send_client": false, 00:05:24.321 "zerocopy_threshold": 0, 00:05:24.321 "tls_version": 0, 00:05:24.321 "enable_ktls": false 00:05:24.321 } 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "method": "sock_impl_set_options", 00:05:24.321 "params": { 00:05:24.321 "impl_name": "posix", 00:05:24.321 "recv_buf_size": 2097152, 00:05:24.321 "send_buf_size": 2097152, 00:05:24.321 "enable_recv_pipe": true, 00:05:24.321 "enable_quickack": false, 00:05:24.321 "enable_placement_id": 0, 00:05:24.321 "enable_zerocopy_send_server": true, 00:05:24.321 "enable_zerocopy_send_client": false, 00:05:24.321 "zerocopy_threshold": 0, 00:05:24.321 "tls_version": 0, 00:05:24.321 "enable_ktls": false 00:05:24.321 } 00:05:24.321 } 00:05:24.321 ] 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "subsystem": "vmd", 00:05:24.321 "config": [] 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "subsystem": "accel", 00:05:24.321 "config": [ 00:05:24.321 { 00:05:24.321 "method": "accel_set_options", 00:05:24.321 "params": { 00:05:24.321 "small_cache_size": 128, 00:05:24.321 "large_cache_size": 16, 00:05:24.321 "task_count": 2048, 00:05:24.321 "sequence_count": 2048, 00:05:24.321 "buf_count": 2048 00:05:24.321 } 00:05:24.321 } 00:05:24.321 ] 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "subsystem": "bdev", 00:05:24.321 "config": [ 00:05:24.321 { 00:05:24.321 "method": "bdev_set_options", 00:05:24.321 "params": { 00:05:24.321 "bdev_io_pool_size": 65535, 00:05:24.321 "bdev_io_cache_size": 256, 00:05:24.321 "bdev_auto_examine": true, 00:05:24.321 "iobuf_small_cache_size": 128, 00:05:24.321 "iobuf_large_cache_size": 16 00:05:24.321 } 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "method": "bdev_raid_set_options", 00:05:24.321 "params": { 00:05:24.321 "process_window_size_kb": 1024 00:05:24.321 } 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "method": "bdev_iscsi_set_options", 00:05:24.321 "params": { 00:05:24.321 "timeout_sec": 30 00:05:24.321 } 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "method": "bdev_nvme_set_options", 00:05:24.321 "params": { 00:05:24.321 "action_on_timeout": "none", 00:05:24.321 "timeout_us": 0, 00:05:24.321 "timeout_admin_us": 0, 00:05:24.321 "keep_alive_timeout_ms": 10000, 00:05:24.321 "arbitration_burst": 0, 00:05:24.321 "low_priority_weight": 0, 00:05:24.321 "medium_priority_weight": 0, 00:05:24.321 "high_priority_weight": 0, 00:05:24.321 "nvme_adminq_poll_period_us": 10000, 00:05:24.321 "nvme_ioq_poll_period_us": 0, 00:05:24.321 "io_queue_requests": 0, 00:05:24.321 "delay_cmd_submit": true, 00:05:24.321 "transport_retry_count": 4, 00:05:24.321 "bdev_retry_count": 3, 00:05:24.321 "transport_ack_timeout": 0, 00:05:24.321 "ctrlr_loss_timeout_sec": 0, 00:05:24.321 "reconnect_delay_sec": 0, 00:05:24.321 "fast_io_fail_timeout_sec": 0, 00:05:24.321 "disable_auto_failback": false, 00:05:24.321 "generate_uuids": false, 00:05:24.321 "transport_tos": 0, 
00:05:24.321 "nvme_error_stat": false, 00:05:24.321 "rdma_srq_size": 0, 00:05:24.321 "io_path_stat": false, 00:05:24.321 "allow_accel_sequence": false, 00:05:24.321 "rdma_max_cq_size": 0, 00:05:24.321 "rdma_cm_event_timeout_ms": 0, 00:05:24.321 "dhchap_digests": [ 00:05:24.321 "sha256", 00:05:24.321 "sha384", 00:05:24.321 "sha512" 00:05:24.321 ], 00:05:24.321 "dhchap_dhgroups": [ 00:05:24.321 "null", 00:05:24.321 "ffdhe2048", 00:05:24.321 "ffdhe3072", 00:05:24.321 "ffdhe4096", 00:05:24.321 "ffdhe6144", 00:05:24.321 "ffdhe8192" 00:05:24.321 ] 00:05:24.321 } 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "method": "bdev_nvme_set_hotplug", 00:05:24.321 "params": { 00:05:24.321 "period_us": 100000, 00:05:24.321 "enable": false 00:05:24.321 } 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "method": "bdev_wait_for_examine" 00:05:24.321 } 00:05:24.321 ] 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "subsystem": "scsi", 00:05:24.321 "config": null 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "subsystem": "scheduler", 00:05:24.321 "config": [ 00:05:24.321 { 00:05:24.321 "method": "framework_set_scheduler", 00:05:24.321 "params": { 00:05:24.321 "name": "static" 00:05:24.321 } 00:05:24.321 } 00:05:24.321 ] 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "subsystem": "vhost_scsi", 00:05:24.321 "config": [] 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "subsystem": "vhost_blk", 00:05:24.321 "config": [] 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "subsystem": "ublk", 00:05:24.321 "config": [] 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "subsystem": "nbd", 00:05:24.321 "config": [] 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "subsystem": "nvmf", 00:05:24.321 "config": [ 00:05:24.321 { 00:05:24.321 "method": "nvmf_set_config", 00:05:24.321 "params": { 00:05:24.321 "discovery_filter": "match_any", 00:05:24.321 "admin_cmd_passthru": { 00:05:24.321 "identify_ctrlr": false 00:05:24.321 } 00:05:24.321 } 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "method": "nvmf_set_max_subsystems", 00:05:24.321 "params": { 00:05:24.321 "max_subsystems": 1024 00:05:24.321 } 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "method": "nvmf_set_crdt", 00:05:24.321 "params": { 00:05:24.321 "crdt1": 0, 00:05:24.321 "crdt2": 0, 00:05:24.321 "crdt3": 0 00:05:24.321 } 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "method": "nvmf_create_transport", 00:05:24.321 "params": { 00:05:24.321 "trtype": "TCP", 00:05:24.321 "max_queue_depth": 128, 00:05:24.321 "max_io_qpairs_per_ctrlr": 127, 00:05:24.321 "in_capsule_data_size": 4096, 00:05:24.321 "max_io_size": 131072, 00:05:24.321 "io_unit_size": 131072, 00:05:24.321 "max_aq_depth": 128, 00:05:24.321 "num_shared_buffers": 511, 00:05:24.321 "buf_cache_size": 4294967295, 00:05:24.321 "dif_insert_or_strip": false, 00:05:24.321 "zcopy": false, 00:05:24.321 "c2h_success": true, 00:05:24.321 "sock_priority": 0, 00:05:24.321 "abort_timeout_sec": 1, 00:05:24.321 "ack_timeout": 0, 00:05:24.321 "data_wr_pool_size": 0 00:05:24.321 } 00:05:24.321 } 00:05:24.321 ] 00:05:24.321 }, 00:05:24.321 { 00:05:24.321 "subsystem": "iscsi", 00:05:24.321 "config": [ 00:05:24.321 { 00:05:24.321 "method": "iscsi_set_options", 00:05:24.321 "params": { 00:05:24.321 "node_base": "iqn.2016-06.io.spdk", 00:05:24.321 "max_sessions": 128, 00:05:24.321 "max_connections_per_session": 2, 00:05:24.321 "max_queue_depth": 64, 00:05:24.321 "default_time2wait": 2, 00:05:24.321 "default_time2retain": 20, 00:05:24.321 "first_burst_length": 8192, 00:05:24.321 "immediate_data": true, 00:05:24.321 "allow_duplicated_isid": false, 00:05:24.321 
"error_recovery_level": 0, 00:05:24.321 "nop_timeout": 60, 00:05:24.321 "nop_in_interval": 30, 00:05:24.321 "disable_chap": false, 00:05:24.321 "require_chap": false, 00:05:24.321 "mutual_chap": false, 00:05:24.321 "chap_group": 0, 00:05:24.321 "max_large_datain_per_connection": 64, 00:05:24.321 "max_r2t_per_connection": 4, 00:05:24.321 "pdu_pool_size": 36864, 00:05:24.321 "immediate_data_pool_size": 16384, 00:05:24.321 "data_out_pool_size": 2048 00:05:24.321 } 00:05:24.321 } 00:05:24.321 ] 00:05:24.321 } 00:05:24.321 ] 00:05:24.321 } 00:05:24.321 13:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:24.321 13:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 869783 00:05:24.321 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 869783 ']' 00:05:24.321 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 869783 00:05:24.321 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:24.321 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.321 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 869783 00:05:24.321 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.321 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.321 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 869783' 00:05:24.321 killing process with pid 869783 00:05:24.321 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 869783 00:05:24.322 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 869783 00:05:24.582 13:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=869977 00:05:24.582 13:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:24.582 13:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:29.867 13:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 869977 00:05:29.867 13:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 869977 ']' 00:05:29.867 13:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 869977 00:05:29.868 13:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:29.868 13:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:29.868 13:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 869977 00:05:29.868 13:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:29.868 13:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:29.868 13:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 869977' 00:05:29.868 killing process with pid 869977 00:05:29.868 13:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 869977 00:05:29.868 13:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 869977 00:05:29.868 13:35:56 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:29.868 00:05:29.868 real 0m6.547s 00:05:29.868 user 0m6.414s 00:05:29.868 sys 0m0.540s 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.868 ************************************ 00:05:29.868 END TEST skip_rpc_with_json 00:05:29.868 ************************************ 00:05:29.868 13:35:56 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:29.868 13:35:56 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:29.868 13:35:56 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.868 13:35:56 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.868 13:35:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.868 ************************************ 00:05:29.868 START TEST skip_rpc_with_delay 00:05:29.868 ************************************ 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:29.868 [2024-07-15 13:35:56.264425] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:29.868 [2024-07-15 13:35:56.264527] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.868 00:05:29.868 real 0m0.077s 00:05:29.868 user 0m0.050s 00:05:29.868 sys 0m0.027s 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.868 13:35:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:29.868 ************************************ 00:05:29.868 END TEST skip_rpc_with_delay 00:05:29.868 ************************************ 00:05:29.868 13:35:56 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:29.868 13:35:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:29.868 13:35:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:29.868 13:35:56 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:29.868 13:35:56 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.868 13:35:56 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.868 13:35:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.868 ************************************ 00:05:29.868 START TEST exit_on_failed_rpc_init 00:05:29.868 ************************************ 00:05:29.868 13:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:29.868 13:35:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=871219 00:05:29.868 13:35:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 871219 00:05:29.868 13:35:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.868 13:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 871219 ']' 00:05:29.868 13:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.868 13:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.868 13:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.868 13:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.868 13:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.129 [2024-07-15 13:35:56.421189] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:30.129 [2024-07-15 13:35:56.421248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871219 ] 00:05:30.129 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.129 [2024-07-15 13:35:56.486253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.129 [2024-07-15 13:35:56.561898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.700 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.700 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:30.700 13:35:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.700 13:35:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.700 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:30.700 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.700 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.700 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.700 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.700 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.700 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.700 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.701 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.701 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:30.701 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.961 [2024-07-15 13:35:57.258200] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:30.961 [2024-07-15 13:35:57.258251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871373 ] 00:05:30.961 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.961 [2024-07-15 13:35:57.332044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.961 [2024-07-15 13:35:57.396066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.961 [2024-07-15 13:35:57.396134] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:30.961 [2024-07-15 13:35:57.396143] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:30.961 [2024-07-15 13:35:57.396150] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:30.961 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:30.961 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:30.961 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:30.961 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:30.961 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:30.961 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:30.961 13:35:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:30.961 13:35:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 871219 00:05:30.961 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 871219 ']' 00:05:30.961 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 871219 00:05:30.961 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:30.961 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.961 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 871219 00:05:31.222 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.222 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.222 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 871219' 00:05:31.222 killing process with pid 871219 00:05:31.222 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 871219 00:05:31.222 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 871219 00:05:31.222 00:05:31.222 real 0m1.355s 00:05:31.222 user 0m1.577s 00:05:31.222 sys 0m0.393s 00:05:31.222 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.222 13:35:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.222 ************************************ 00:05:31.222 END TEST exit_on_failed_rpc_init 00:05:31.222 ************************************ 00:05:31.484 13:35:57 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:31.484 13:35:57 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:31.484 00:05:31.484 real 0m13.663s 00:05:31.484 user 0m13.271s 00:05:31.484 sys 0m1.471s 00:05:31.484 13:35:57 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.484 13:35:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.484 ************************************ 00:05:31.484 END TEST skip_rpc 00:05:31.484 ************************************ 00:05:31.484 13:35:57 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.484 13:35:57 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:31.484 13:35:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.484 13:35:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.484 13:35:57 -- common/autotest_common.sh@10 -- # set +x 00:05:31.484 ************************************ 00:05:31.484 START TEST rpc_client 00:05:31.484 ************************************ 00:05:31.484 13:35:57 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:31.484 * Looking for test storage... 00:05:31.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:31.484 13:35:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:31.484 OK 00:05:31.484 13:35:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:31.485 00:05:31.485 real 0m0.128s 00:05:31.485 user 0m0.060s 00:05:31.485 sys 0m0.077s 00:05:31.485 13:35:57 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.485 13:35:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:31.485 ************************************ 00:05:31.485 END TEST rpc_client 00:05:31.485 ************************************ 00:05:31.485 13:35:58 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.485 13:35:58 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:31.485 13:35:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.485 13:35:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.485 13:35:58 -- common/autotest_common.sh@10 -- # set +x 00:05:31.747 ************************************ 00:05:31.747 START TEST json_config 00:05:31.747 ************************************ 00:05:31.747 13:35:58 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.747 13:35:58 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.747 13:35:58 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.747 13:35:58 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.747 13:35:58 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.747 13:35:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.747 13:35:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.747 13:35:58 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.747 13:35:58 json_config -- paths/export.sh@5 -- # export PATH 00:05:31.747 13:35:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@47 -- # : 0 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.747 13:35:58 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:31.747 13:35:58 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:31.747 INFO: JSON configuration test init 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:31.747 13:35:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.747 13:35:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:31.747 13:35:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.747 13:35:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.747 13:35:58 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:31.747 13:35:58 json_config -- json_config/common.sh@9 -- # local app=target 00:05:31.747 13:35:58 json_config -- json_config/common.sh@10 -- # shift 00:05:31.747 13:35:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.747 13:35:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.747 13:35:58 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:05:31.747 13:35:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.747 13:35:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.747 13:35:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=871734 00:05:31.747 13:35:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:31.747 Waiting for target to run... 00:05:31.747 13:35:58 json_config -- json_config/common.sh@25 -- # waitforlisten 871734 /var/tmp/spdk_tgt.sock 00:05:31.747 13:35:58 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:31.747 13:35:58 json_config -- common/autotest_common.sh@829 -- # '[' -z 871734 ']' 00:05:31.747 13:35:58 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.747 13:35:58 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.747 13:35:58 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.747 13:35:58 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.747 13:35:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.747 [2024-07-15 13:35:58.218088] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:31.747 [2024-07-15 13:35:58.218168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871734 ] 00:05:31.747 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.008 [2024-07-15 13:35:58.493780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.269 [2024-07-15 13:35:58.548020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.530 13:35:58 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.530 13:35:58 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:32.530 13:35:58 json_config -- json_config/common.sh@26 -- # echo '' 00:05:32.530 00:05:32.530 13:35:58 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:32.530 13:35:58 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:32.530 13:35:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.530 13:35:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.530 13:35:58 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:32.530 13:35:58 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:32.530 13:35:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.530 13:35:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.530 13:35:59 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:32.530 13:35:59 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:32.530 13:35:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:05:33.100 13:35:59 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:33.101 13:35:59 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:33.101 13:35:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.101 13:35:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.101 13:35:59 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:33.101 13:35:59 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:33.101 13:35:59 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:33.101 13:35:59 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:33.101 13:35:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:33.101 13:35:59 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:33.361 13:35:59 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:33.361 13:35:59 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:33.361 13:35:59 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:33.361 13:35:59 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:33.361 13:35:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.361 13:35:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.362 13:35:59 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:33.362 13:35:59 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:33.362 13:35:59 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:33.362 13:35:59 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:33.362 13:35:59 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:33.362 13:35:59 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:33.362 13:35:59 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:33.362 13:35:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.362 13:35:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.362 13:35:59 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:33.362 13:35:59 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:33.362 13:35:59 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:33.362 13:35:59 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.362 13:35:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.623 MallocForNvmf0 00:05:33.623 13:35:59 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.623 13:35:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.623 MallocForNvmf1 
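The tgt_rpc helper seen in the trace above is simply scripts/rpc.py pointed at the target's RPC socket. A condensed, hand-written sketch of the two bdev creations it performs (socket path, sizes and names copied from the trace; the real test drives these through json_config.sh rather than standalone commands):

  # create the two malloc bdevs that later back the NVMe-oF namespaces
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0   # 8 MiB, 512-byte blocks
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1  # 4 MiB, 1024-byte blocks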
00:05:33.623 13:36:00 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.623 13:36:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.884 [2024-07-15 13:36:00.235202] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.884 13:36:00 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.884 13:36:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.884 13:36:00 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:33.884 13:36:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.146 13:36:00 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.146 13:36:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.407 13:36:00 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.407 13:36:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.407 [2024-07-15 13:36:00.849169] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:34.407 13:36:00 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:34.407 13:36:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.407 13:36:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.407 13:36:00 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:34.407 13:36:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.407 13:36:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.668 13:36:00 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:34.668 13:36:00 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.668 13:36:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.668 MallocBdevForConfigChangeCheck 00:05:34.668 13:36:01 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:34.668 13:36:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.668 13:36:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.668 13:36:01 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:34.668 13:36:01 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.929 13:36:01 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:34.929 INFO: shutting down applications... 00:05:34.929 13:36:01 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:34.929 13:36:01 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:34.929 13:36:01 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:34.929 13:36:01 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:35.500 Calling clear_iscsi_subsystem 00:05:35.500 Calling clear_nvmf_subsystem 00:05:35.500 Calling clear_nbd_subsystem 00:05:35.500 Calling clear_ublk_subsystem 00:05:35.500 Calling clear_vhost_blk_subsystem 00:05:35.500 Calling clear_vhost_scsi_subsystem 00:05:35.500 Calling clear_bdev_subsystem 00:05:35.500 13:36:01 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:35.500 13:36:01 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:35.500 13:36:01 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:35.500 13:36:01 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.500 13:36:01 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:35.500 13:36:01 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:35.760 13:36:02 json_config -- json_config/json_config.sh@345 -- # break 00:05:35.760 13:36:02 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:35.760 13:36:02 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:35.760 13:36:02 json_config -- json_config/common.sh@31 -- # local app=target 00:05:35.760 13:36:02 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:35.760 13:36:02 json_config -- json_config/common.sh@35 -- # [[ -n 871734 ]] 00:05:35.760 13:36:02 json_config -- json_config/common.sh@38 -- # kill -SIGINT 871734 00:05:35.761 13:36:02 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:35.761 13:36:02 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.761 13:36:02 json_config -- json_config/common.sh@41 -- # kill -0 871734 00:05:35.761 13:36:02 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.332 13:36:02 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.332 13:36:02 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.332 13:36:02 json_config -- json_config/common.sh@41 -- # kill -0 871734 00:05:36.332 13:36:02 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:36.332 13:36:02 json_config -- json_config/common.sh@43 -- # break 00:05:36.332 13:36:02 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:36.332 13:36:02 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:36.332 SPDK target shutdown done 00:05:36.332 13:36:02 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:36.332 INFO: relaunching applications... 00:05:36.332 13:36:02 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.332 13:36:02 json_config -- json_config/common.sh@9 -- # local app=target 00:05:36.332 13:36:02 json_config -- json_config/common.sh@10 -- # shift 00:05:36.332 13:36:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.332 13:36:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.332 13:36:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.332 13:36:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.332 13:36:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.332 13:36:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=872732 00:05:36.332 13:36:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.332 Waiting for target to run... 00:05:36.332 13:36:02 json_config -- json_config/common.sh@25 -- # waitforlisten 872732 /var/tmp/spdk_tgt.sock 00:05:36.332 13:36:02 json_config -- common/autotest_common.sh@829 -- # '[' -z 872732 ']' 00:05:36.332 13:36:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.332 13:36:02 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.332 13:36:02 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.332 13:36:02 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.332 13:36:02 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.332 13:36:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.332 [2024-07-15 13:36:02.734991] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:36.332 [2024-07-15 13:36:02.735073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872732 ] 00:05:36.332 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.593 [2024-07-15 13:36:02.987028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.593 [2024-07-15 13:36:03.037198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.163 [2024-07-15 13:36:03.529566] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.163 [2024-07-15 13:36:03.561934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:37.163 13:36:03 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.163 13:36:03 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:37.163 13:36:03 json_config -- json_config/common.sh@26 -- # echo '' 00:05:37.163 00:05:37.163 13:36:03 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:37.163 13:36:03 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:37.163 INFO: Checking if target configuration is the same... 00:05:37.163 13:36:03 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.163 13:36:03 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:37.163 13:36:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.163 + '[' 2 -ne 2 ']' 00:05:37.163 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:37.163 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:37.163 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:37.163 +++ basename /dev/fd/62 00:05:37.163 ++ mktemp /tmp/62.XXX 00:05:37.163 + tmp_file_1=/tmp/62.m8n 00:05:37.163 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.163 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:37.163 + tmp_file_2=/tmp/spdk_tgt_config.json.jbL 00:05:37.163 + ret=0 00:05:37.163 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:37.423 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:37.685 + diff -u /tmp/62.m8n /tmp/spdk_tgt_config.json.jbL 00:05:37.685 + echo 'INFO: JSON config files are the same' 00:05:37.685 INFO: JSON config files are the same 00:05:37.685 + rm /tmp/62.m8n /tmp/spdk_tgt_config.json.jbL 00:05:37.685 + exit 0 00:05:37.685 13:36:03 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:37.685 13:36:03 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:37.685 INFO: changing configuration and checking if this can be detected... 
00:05:37.685 13:36:03 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:37.685 13:36:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:37.685 13:36:04 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.685 13:36:04 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:37.685 13:36:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.685 + '[' 2 -ne 2 ']' 00:05:37.685 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:37.685 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:37.685 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:37.685 +++ basename /dev/fd/62 00:05:37.685 ++ mktemp /tmp/62.XXX 00:05:37.685 + tmp_file_1=/tmp/62.kRM 00:05:37.685 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.685 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:37.685 + tmp_file_2=/tmp/spdk_tgt_config.json.AAL 00:05:37.685 + ret=0 00:05:37.685 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:37.946 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:37.946 + diff -u /tmp/62.kRM /tmp/spdk_tgt_config.json.AAL 00:05:38.206 + ret=1 00:05:38.206 + echo '=== Start of file: /tmp/62.kRM ===' 00:05:38.206 + cat /tmp/62.kRM 00:05:38.206 + echo '=== End of file: /tmp/62.kRM ===' 00:05:38.206 + echo '' 00:05:38.206 + echo '=== Start of file: /tmp/spdk_tgt_config.json.AAL ===' 00:05:38.206 + cat /tmp/spdk_tgt_config.json.AAL 00:05:38.206 + echo '=== End of file: /tmp/spdk_tgt_config.json.AAL ===' 00:05:38.206 + echo '' 00:05:38.206 + rm /tmp/62.kRM /tmp/spdk_tgt_config.json.AAL 00:05:38.206 + exit 1 00:05:38.206 13:36:04 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:38.206 INFO: configuration change detected. 
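What the two json_diff.sh runs above amount to: dump the live configuration with save_config, normalize both JSON documents with config_filter.py -method sort, and diff them. A minimal sketch of that check (temporary file names here are made up; config_filter.py is assumed to read the config on stdin, as its bare invocation in the trace suggests):

  # compare the target's live config against the config file it was started with
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json
  ./test/json_config/config_filter.py -method sort < /tmp/live_config.json  > /tmp/live_sorted.json
  ./test/json_config/config_filter.py -method sort < ./spdk_tgt_config.json > /tmp/disk_sorted.json
  if diff -u /tmp/disk_sorted.json /tmp/live_sorted.json; then
      echo 'INFO: JSON config files are the same'
  else
      echo 'INFO: configuration change detected.'
  fi

In the second run above the configs differ because MallocBdevForConfigChangeCheck was deleted from the running target via bdev_malloc_delete, so diff exits non-zero and the test reports the change.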
00:05:38.206 13:36:04 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:38.206 13:36:04 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:38.206 13:36:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.206 13:36:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.206 13:36:04 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:38.206 13:36:04 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:38.206 13:36:04 json_config -- json_config/json_config.sh@317 -- # [[ -n 872732 ]] 00:05:38.206 13:36:04 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:38.206 13:36:04 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:38.206 13:36:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.206 13:36:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.206 13:36:04 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:38.206 13:36:04 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:38.206 13:36:04 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:38.206 13:36:04 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:38.206 13:36:04 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:38.206 13:36:04 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:38.206 13:36:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.206 13:36:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.206 13:36:04 json_config -- json_config/json_config.sh@323 -- # killprocess 872732 00:05:38.206 13:36:04 json_config -- common/autotest_common.sh@948 -- # '[' -z 872732 ']' 00:05:38.206 13:36:04 json_config -- common/autotest_common.sh@952 -- # kill -0 872732 00:05:38.206 13:36:04 json_config -- common/autotest_common.sh@953 -- # uname 00:05:38.206 13:36:04 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.206 13:36:04 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 872732 00:05:38.206 13:36:04 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.206 13:36:04 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.207 13:36:04 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 872732' 00:05:38.207 killing process with pid 872732 00:05:38.207 13:36:04 json_config -- common/autotest_common.sh@967 -- # kill 872732 00:05:38.207 13:36:04 json_config -- common/autotest_common.sh@972 -- # wait 872732 00:05:38.468 13:36:04 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.468 13:36:04 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:38.468 13:36:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.468 13:36:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.468 13:36:04 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:38.468 13:36:04 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:38.468 INFO: Success 00:05:38.468 00:05:38.468 real 0m6.882s 00:05:38.468 user 
0m8.393s 00:05:38.468 sys 0m1.655s 00:05:38.468 13:36:04 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.468 13:36:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.468 ************************************ 00:05:38.468 END TEST json_config 00:05:38.468 ************************************ 00:05:38.468 13:36:04 -- common/autotest_common.sh@1142 -- # return 0 00:05:38.468 13:36:04 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:38.468 13:36:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.468 13:36:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.468 13:36:04 -- common/autotest_common.sh@10 -- # set +x 00:05:38.731 ************************************ 00:05:38.731 START TEST json_config_extra_key 00:05:38.731 ************************************ 00:05:38.731 13:36:04 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:38.731 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:38.731 13:36:05 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.731 13:36:05 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.731 13:36:05 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.731 13:36:05 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.731 13:36:05 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.731 13:36:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.731 13:36:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:38.731 13:36:05 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:38.731 13:36:05 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:38.731 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:38.731 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:38.731 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:38.731 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:38.731 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:38.731 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:38.731 13:36:05 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:38.731 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:38.731 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:38.731 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:38.731 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:38.731 INFO: launching applications... 00:05:38.731 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:38.731 13:36:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:38.731 13:36:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:38.731 13:36:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:38.731 13:36:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:38.731 13:36:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:38.731 13:36:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.731 13:36:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.731 13:36:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=873503 00:05:38.731 13:36:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:38.731 Waiting for target to run... 00:05:38.731 13:36:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 873503 /var/tmp/spdk_tgt.sock 00:05:38.731 13:36:05 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 873503 ']' 00:05:38.731 13:36:05 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:38.731 13:36:05 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:38.731 13:36:05 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.731 13:36:05 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:38.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:38.731 13:36:05 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.731 13:36:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:38.731 [2024-07-15 13:36:05.160812] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:05:38.731 [2024-07-15 13:36:05.160886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873503 ] 00:05:38.731 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.333 [2024-07-15 13:36:05.572878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.333 [2024-07-15 13:36:05.634709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.595 13:36:05 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.595 13:36:05 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:39.595 13:36:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:39.595 00:05:39.595 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:39.595 INFO: shutting down applications... 00:05:39.595 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:39.595 13:36:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:39.595 13:36:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:39.595 13:36:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 873503 ]] 00:05:39.595 13:36:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 873503 00:05:39.595 13:36:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:39.595 13:36:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.595 13:36:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 873503 00:05:39.595 13:36:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.169 13:36:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.169 13:36:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.169 13:36:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 873503 00:05:40.169 13:36:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:40.169 13:36:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:40.169 13:36:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:40.169 13:36:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:40.169 SPDK target shutdown done 00:05:40.169 13:36:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:40.169 Success 00:05:40.169 00:05:40.169 real 0m1.453s 00:05:40.169 user 0m0.970s 00:05:40.169 sys 0m0.514s 00:05:40.169 13:36:06 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.169 13:36:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:40.169 ************************************ 00:05:40.169 END TEST json_config_extra_key 00:05:40.169 ************************************ 00:05:40.169 13:36:06 -- common/autotest_common.sh@1142 -- # return 0 00:05:40.169 13:36:06 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.169 13:36:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.169 13:36:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.169 13:36:06 -- 
common/autotest_common.sh@10 -- # set +x 00:05:40.169 ************************************ 00:05:40.169 START TEST alias_rpc 00:05:40.169 ************************************ 00:05:40.169 13:36:06 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.169 * Looking for test storage... 00:05:40.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:40.169 13:36:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:40.169 13:36:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=873839 00:05:40.169 13:36:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 873839 00:05:40.169 13:36:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.169 13:36:06 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 873839 ']' 00:05:40.169 13:36:06 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.169 13:36:06 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.169 13:36:06 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.169 13:36:06 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.169 13:36:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.169 [2024-07-15 13:36:06.690256] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:40.169 [2024-07-15 13:36:06.690327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873839 ] 00:05:40.430 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.430 [2024-07-15 13:36:06.756448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.430 [2024-07-15 13:36:06.829856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.001 13:36:07 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.001 13:36:07 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:41.001 13:36:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:41.262 13:36:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 873839 00:05:41.262 13:36:07 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 873839 ']' 00:05:41.262 13:36:07 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 873839 00:05:41.262 13:36:07 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:41.262 13:36:07 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.262 13:36:07 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 873839 00:05:41.262 13:36:07 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:41.262 13:36:07 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:41.262 13:36:07 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 873839' 00:05:41.262 killing process with pid 873839 00:05:41.262 13:36:07 alias_rpc -- common/autotest_common.sh@967 
-- # kill 873839 00:05:41.262 13:36:07 alias_rpc -- common/autotest_common.sh@972 -- # wait 873839 00:05:41.523 00:05:41.523 real 0m1.393s 00:05:41.523 user 0m1.536s 00:05:41.523 sys 0m0.387s 00:05:41.523 13:36:07 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.523 13:36:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.523 ************************************ 00:05:41.523 END TEST alias_rpc 00:05:41.523 ************************************ 00:05:41.523 13:36:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:41.523 13:36:07 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:41.523 13:36:07 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:41.523 13:36:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.523 13:36:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.523 13:36:07 -- common/autotest_common.sh@10 -- # set +x 00:05:41.523 ************************************ 00:05:41.523 START TEST spdkcli_tcp 00:05:41.523 ************************************ 00:05:41.523 13:36:07 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:41.784 * Looking for test storage... 00:05:41.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:41.784 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:41.784 13:36:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:41.784 13:36:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:41.784 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:41.784 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:41.784 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:41.784 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:41.784 13:36:08 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.784 13:36:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.784 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=874123 00:05:41.784 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 874123 00:05:41.784 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:41.784 13:36:08 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 874123 ']' 00:05:41.784 13:36:08 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.784 13:36:08 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.784 13:36:08 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.784 13:36:08 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.784 13:36:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.784 [2024-07-15 13:36:08.163782] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
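The spdkcli_tcp run starting here is the one place in this log where the RPC interface is exercised over TCP rather than directly over the Unix socket: socat bridges /var/tmp/spdk.sock to 127.0.0.1:9998 and rpc.py is pointed at that address. Condensed from the commands that follow, with $SPDK the workspace checkout as before; the flag meanings are inferred from their values in the trace (-r connection retries, -t per-call timeout in seconds, -s server address, -p TCP port).

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # expose the Unix RPC socket on TCP
socat_pid=$!
$SPDK/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill $socat_pid

The JSON array of method names that fills the next several hundred log lines is simply the output of that rpc_get_methods call.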
00:05:41.784 [2024-07-15 13:36:08.163855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874123 ] 00:05:41.784 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.784 [2024-07-15 13:36:08.229842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.784 [2024-07-15 13:36:08.307640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.784 [2024-07-15 13:36:08.307642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.728 13:36:08 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.728 13:36:08 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:42.728 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=874289 00:05:42.728 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:42.728 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:42.728 [ 00:05:42.728 "bdev_malloc_delete", 00:05:42.728 "bdev_malloc_create", 00:05:42.728 "bdev_null_resize", 00:05:42.728 "bdev_null_delete", 00:05:42.728 "bdev_null_create", 00:05:42.728 "bdev_nvme_cuse_unregister", 00:05:42.728 "bdev_nvme_cuse_register", 00:05:42.728 "bdev_opal_new_user", 00:05:42.728 "bdev_opal_set_lock_state", 00:05:42.728 "bdev_opal_delete", 00:05:42.728 "bdev_opal_get_info", 00:05:42.728 "bdev_opal_create", 00:05:42.728 "bdev_nvme_opal_revert", 00:05:42.728 "bdev_nvme_opal_init", 00:05:42.728 "bdev_nvme_send_cmd", 00:05:42.728 "bdev_nvme_get_path_iostat", 00:05:42.728 "bdev_nvme_get_mdns_discovery_info", 00:05:42.728 "bdev_nvme_stop_mdns_discovery", 00:05:42.728 "bdev_nvme_start_mdns_discovery", 00:05:42.728 "bdev_nvme_set_multipath_policy", 00:05:42.728 "bdev_nvme_set_preferred_path", 00:05:42.728 "bdev_nvme_get_io_paths", 00:05:42.728 "bdev_nvme_remove_error_injection", 00:05:42.728 "bdev_nvme_add_error_injection", 00:05:42.728 "bdev_nvme_get_discovery_info", 00:05:42.728 "bdev_nvme_stop_discovery", 00:05:42.728 "bdev_nvme_start_discovery", 00:05:42.728 "bdev_nvme_get_controller_health_info", 00:05:42.728 "bdev_nvme_disable_controller", 00:05:42.728 "bdev_nvme_enable_controller", 00:05:42.728 "bdev_nvme_reset_controller", 00:05:42.728 "bdev_nvme_get_transport_statistics", 00:05:42.728 "bdev_nvme_apply_firmware", 00:05:42.728 "bdev_nvme_detach_controller", 00:05:42.728 "bdev_nvme_get_controllers", 00:05:42.728 "bdev_nvme_attach_controller", 00:05:42.728 "bdev_nvme_set_hotplug", 00:05:42.728 "bdev_nvme_set_options", 00:05:42.728 "bdev_passthru_delete", 00:05:42.728 "bdev_passthru_create", 00:05:42.728 "bdev_lvol_set_parent_bdev", 00:05:42.728 "bdev_lvol_set_parent", 00:05:42.728 "bdev_lvol_check_shallow_copy", 00:05:42.728 "bdev_lvol_start_shallow_copy", 00:05:42.728 "bdev_lvol_grow_lvstore", 00:05:42.728 "bdev_lvol_get_lvols", 00:05:42.728 "bdev_lvol_get_lvstores", 00:05:42.728 "bdev_lvol_delete", 00:05:42.728 "bdev_lvol_set_read_only", 00:05:42.728 "bdev_lvol_resize", 00:05:42.728 "bdev_lvol_decouple_parent", 00:05:42.728 "bdev_lvol_inflate", 00:05:42.728 "bdev_lvol_rename", 00:05:42.728 "bdev_lvol_clone_bdev", 00:05:42.728 "bdev_lvol_clone", 00:05:42.728 "bdev_lvol_snapshot", 00:05:42.728 "bdev_lvol_create", 00:05:42.728 "bdev_lvol_delete_lvstore", 00:05:42.728 
"bdev_lvol_rename_lvstore", 00:05:42.728 "bdev_lvol_create_lvstore", 00:05:42.728 "bdev_raid_set_options", 00:05:42.728 "bdev_raid_remove_base_bdev", 00:05:42.728 "bdev_raid_add_base_bdev", 00:05:42.728 "bdev_raid_delete", 00:05:42.728 "bdev_raid_create", 00:05:42.728 "bdev_raid_get_bdevs", 00:05:42.728 "bdev_error_inject_error", 00:05:42.728 "bdev_error_delete", 00:05:42.728 "bdev_error_create", 00:05:42.728 "bdev_split_delete", 00:05:42.728 "bdev_split_create", 00:05:42.728 "bdev_delay_delete", 00:05:42.728 "bdev_delay_create", 00:05:42.728 "bdev_delay_update_latency", 00:05:42.728 "bdev_zone_block_delete", 00:05:42.728 "bdev_zone_block_create", 00:05:42.728 "blobfs_create", 00:05:42.728 "blobfs_detect", 00:05:42.728 "blobfs_set_cache_size", 00:05:42.728 "bdev_aio_delete", 00:05:42.728 "bdev_aio_rescan", 00:05:42.728 "bdev_aio_create", 00:05:42.728 "bdev_ftl_set_property", 00:05:42.728 "bdev_ftl_get_properties", 00:05:42.728 "bdev_ftl_get_stats", 00:05:42.728 "bdev_ftl_unmap", 00:05:42.728 "bdev_ftl_unload", 00:05:42.728 "bdev_ftl_delete", 00:05:42.728 "bdev_ftl_load", 00:05:42.728 "bdev_ftl_create", 00:05:42.728 "bdev_virtio_attach_controller", 00:05:42.728 "bdev_virtio_scsi_get_devices", 00:05:42.728 "bdev_virtio_detach_controller", 00:05:42.728 "bdev_virtio_blk_set_hotplug", 00:05:42.728 "bdev_iscsi_delete", 00:05:42.728 "bdev_iscsi_create", 00:05:42.728 "bdev_iscsi_set_options", 00:05:42.728 "accel_error_inject_error", 00:05:42.728 "ioat_scan_accel_module", 00:05:42.728 "dsa_scan_accel_module", 00:05:42.728 "iaa_scan_accel_module", 00:05:42.728 "vfu_virtio_create_scsi_endpoint", 00:05:42.728 "vfu_virtio_scsi_remove_target", 00:05:42.728 "vfu_virtio_scsi_add_target", 00:05:42.728 "vfu_virtio_create_blk_endpoint", 00:05:42.728 "vfu_virtio_delete_endpoint", 00:05:42.728 "keyring_file_remove_key", 00:05:42.728 "keyring_file_add_key", 00:05:42.728 "keyring_linux_set_options", 00:05:42.728 "iscsi_get_histogram", 00:05:42.728 "iscsi_enable_histogram", 00:05:42.728 "iscsi_set_options", 00:05:42.728 "iscsi_get_auth_groups", 00:05:42.728 "iscsi_auth_group_remove_secret", 00:05:42.728 "iscsi_auth_group_add_secret", 00:05:42.728 "iscsi_delete_auth_group", 00:05:42.728 "iscsi_create_auth_group", 00:05:42.728 "iscsi_set_discovery_auth", 00:05:42.728 "iscsi_get_options", 00:05:42.728 "iscsi_target_node_request_logout", 00:05:42.728 "iscsi_target_node_set_redirect", 00:05:42.728 "iscsi_target_node_set_auth", 00:05:42.728 "iscsi_target_node_add_lun", 00:05:42.728 "iscsi_get_stats", 00:05:42.728 "iscsi_get_connections", 00:05:42.728 "iscsi_portal_group_set_auth", 00:05:42.728 "iscsi_start_portal_group", 00:05:42.728 "iscsi_delete_portal_group", 00:05:42.728 "iscsi_create_portal_group", 00:05:42.728 "iscsi_get_portal_groups", 00:05:42.728 "iscsi_delete_target_node", 00:05:42.728 "iscsi_target_node_remove_pg_ig_maps", 00:05:42.728 "iscsi_target_node_add_pg_ig_maps", 00:05:42.728 "iscsi_create_target_node", 00:05:42.728 "iscsi_get_target_nodes", 00:05:42.728 "iscsi_delete_initiator_group", 00:05:42.728 "iscsi_initiator_group_remove_initiators", 00:05:42.728 "iscsi_initiator_group_add_initiators", 00:05:42.728 "iscsi_create_initiator_group", 00:05:42.728 "iscsi_get_initiator_groups", 00:05:42.728 "nvmf_set_crdt", 00:05:42.728 "nvmf_set_config", 00:05:42.728 "nvmf_set_max_subsystems", 00:05:42.728 "nvmf_stop_mdns_prr", 00:05:42.728 "nvmf_publish_mdns_prr", 00:05:42.728 "nvmf_subsystem_get_listeners", 00:05:42.728 "nvmf_subsystem_get_qpairs", 00:05:42.728 "nvmf_subsystem_get_controllers", 00:05:42.728 
"nvmf_get_stats", 00:05:42.728 "nvmf_get_transports", 00:05:42.728 "nvmf_create_transport", 00:05:42.728 "nvmf_get_targets", 00:05:42.728 "nvmf_delete_target", 00:05:42.728 "nvmf_create_target", 00:05:42.728 "nvmf_subsystem_allow_any_host", 00:05:42.728 "nvmf_subsystem_remove_host", 00:05:42.728 "nvmf_subsystem_add_host", 00:05:42.728 "nvmf_ns_remove_host", 00:05:42.728 "nvmf_ns_add_host", 00:05:42.728 "nvmf_subsystem_remove_ns", 00:05:42.728 "nvmf_subsystem_add_ns", 00:05:42.728 "nvmf_subsystem_listener_set_ana_state", 00:05:42.728 "nvmf_discovery_get_referrals", 00:05:42.728 "nvmf_discovery_remove_referral", 00:05:42.728 "nvmf_discovery_add_referral", 00:05:42.728 "nvmf_subsystem_remove_listener", 00:05:42.728 "nvmf_subsystem_add_listener", 00:05:42.728 "nvmf_delete_subsystem", 00:05:42.728 "nvmf_create_subsystem", 00:05:42.728 "nvmf_get_subsystems", 00:05:42.728 "env_dpdk_get_mem_stats", 00:05:42.728 "nbd_get_disks", 00:05:42.728 "nbd_stop_disk", 00:05:42.728 "nbd_start_disk", 00:05:42.728 "ublk_recover_disk", 00:05:42.728 "ublk_get_disks", 00:05:42.728 "ublk_stop_disk", 00:05:42.728 "ublk_start_disk", 00:05:42.728 "ublk_destroy_target", 00:05:42.728 "ublk_create_target", 00:05:42.728 "virtio_blk_create_transport", 00:05:42.728 "virtio_blk_get_transports", 00:05:42.728 "vhost_controller_set_coalescing", 00:05:42.728 "vhost_get_controllers", 00:05:42.728 "vhost_delete_controller", 00:05:42.728 "vhost_create_blk_controller", 00:05:42.728 "vhost_scsi_controller_remove_target", 00:05:42.728 "vhost_scsi_controller_add_target", 00:05:42.728 "vhost_start_scsi_controller", 00:05:42.728 "vhost_create_scsi_controller", 00:05:42.728 "thread_set_cpumask", 00:05:42.728 "framework_get_governor", 00:05:42.728 "framework_get_scheduler", 00:05:42.728 "framework_set_scheduler", 00:05:42.728 "framework_get_reactors", 00:05:42.728 "thread_get_io_channels", 00:05:42.728 "thread_get_pollers", 00:05:42.728 "thread_get_stats", 00:05:42.728 "framework_monitor_context_switch", 00:05:42.728 "spdk_kill_instance", 00:05:42.728 "log_enable_timestamps", 00:05:42.728 "log_get_flags", 00:05:42.728 "log_clear_flag", 00:05:42.728 "log_set_flag", 00:05:42.728 "log_get_level", 00:05:42.728 "log_set_level", 00:05:42.728 "log_get_print_level", 00:05:42.729 "log_set_print_level", 00:05:42.729 "framework_enable_cpumask_locks", 00:05:42.729 "framework_disable_cpumask_locks", 00:05:42.729 "framework_wait_init", 00:05:42.729 "framework_start_init", 00:05:42.729 "scsi_get_devices", 00:05:42.729 "bdev_get_histogram", 00:05:42.729 "bdev_enable_histogram", 00:05:42.729 "bdev_set_qos_limit", 00:05:42.729 "bdev_set_qd_sampling_period", 00:05:42.729 "bdev_get_bdevs", 00:05:42.729 "bdev_reset_iostat", 00:05:42.729 "bdev_get_iostat", 00:05:42.729 "bdev_examine", 00:05:42.729 "bdev_wait_for_examine", 00:05:42.729 "bdev_set_options", 00:05:42.729 "notify_get_notifications", 00:05:42.729 "notify_get_types", 00:05:42.729 "accel_get_stats", 00:05:42.729 "accel_set_options", 00:05:42.729 "accel_set_driver", 00:05:42.729 "accel_crypto_key_destroy", 00:05:42.729 "accel_crypto_keys_get", 00:05:42.729 "accel_crypto_key_create", 00:05:42.729 "accel_assign_opc", 00:05:42.729 "accel_get_module_info", 00:05:42.729 "accel_get_opc_assignments", 00:05:42.729 "vmd_rescan", 00:05:42.729 "vmd_remove_device", 00:05:42.729 "vmd_enable", 00:05:42.729 "sock_get_default_impl", 00:05:42.729 "sock_set_default_impl", 00:05:42.729 "sock_impl_set_options", 00:05:42.729 "sock_impl_get_options", 00:05:42.729 "iobuf_get_stats", 00:05:42.729 "iobuf_set_options", 
00:05:42.729 "keyring_get_keys", 00:05:42.729 "framework_get_pci_devices", 00:05:42.729 "framework_get_config", 00:05:42.729 "framework_get_subsystems", 00:05:42.729 "vfu_tgt_set_base_path", 00:05:42.729 "trace_get_info", 00:05:42.729 "trace_get_tpoint_group_mask", 00:05:42.729 "trace_disable_tpoint_group", 00:05:42.729 "trace_enable_tpoint_group", 00:05:42.729 "trace_clear_tpoint_mask", 00:05:42.729 "trace_set_tpoint_mask", 00:05:42.729 "spdk_get_version", 00:05:42.729 "rpc_get_methods" 00:05:42.729 ] 00:05:42.729 13:36:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:42.729 13:36:09 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.729 13:36:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.729 13:36:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:42.729 13:36:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 874123 00:05:42.729 13:36:09 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 874123 ']' 00:05:42.729 13:36:09 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 874123 00:05:42.729 13:36:09 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:42.729 13:36:09 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.729 13:36:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 874123 00:05:42.729 13:36:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.729 13:36:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.729 13:36:09 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 874123' 00:05:42.729 killing process with pid 874123 00:05:42.729 13:36:09 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 874123 00:05:42.729 13:36:09 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 874123 00:05:42.990 00:05:42.990 real 0m1.406s 00:05:42.990 user 0m2.543s 00:05:42.990 sys 0m0.444s 00:05:42.990 13:36:09 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.990 13:36:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.990 ************************************ 00:05:42.990 END TEST spdkcli_tcp 00:05:42.990 ************************************ 00:05:42.990 13:36:09 -- common/autotest_common.sh@1142 -- # return 0 00:05:42.990 13:36:09 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:42.990 13:36:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.990 13:36:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.990 13:36:09 -- common/autotest_common.sh@10 -- # set +x 00:05:42.990 ************************************ 00:05:42.990 START TEST dpdk_mem_utility 00:05:42.990 ************************************ 00:05:42.990 13:36:09 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.251 * Looking for test storage... 
00:05:43.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:43.251 13:36:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:43.251 13:36:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=874602 00:05:43.251 13:36:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 874602 00:05:43.251 13:36:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.251 13:36:09 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 874602 ']' 00:05:43.251 13:36:09 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.251 13:36:09 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.251 13:36:09 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.251 13:36:09 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.251 13:36:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:43.251 [2024-07-15 13:36:09.641754] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:43.251 [2024-07-15 13:36:09.641823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874602 ] 00:05:43.251 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.251 [2024-07-15 13:36:09.707424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.511 [2024-07-15 13:36:09.782721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.081 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.081 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:44.081 13:36:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:44.081 13:36:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:44.081 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.081 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.081 { 00:05:44.081 "filename": "/tmp/spdk_mem_dump.txt" 00:05:44.081 } 00:05:44.081 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.081 13:36:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:44.081 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:44.081 1 heaps totaling size 814.000000 MiB 00:05:44.081 size: 814.000000 MiB heap id: 0 00:05:44.081 end heaps---------- 00:05:44.081 8 mempools totaling size 598.116089 MiB 00:05:44.081 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:44.081 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:44.081 size: 84.521057 MiB name: bdev_io_874602 00:05:44.081 size: 51.011292 MiB name: evtpool_874602 00:05:44.081 size: 
50.003479 MiB name: msgpool_874602 00:05:44.081 size: 21.763794 MiB name: PDU_Pool 00:05:44.081 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:44.081 size: 0.026123 MiB name: Session_Pool 00:05:44.081 end mempools------- 00:05:44.081 6 memzones totaling size 4.142822 MiB 00:05:44.081 size: 1.000366 MiB name: RG_ring_0_874602 00:05:44.081 size: 1.000366 MiB name: RG_ring_1_874602 00:05:44.081 size: 1.000366 MiB name: RG_ring_4_874602 00:05:44.081 size: 1.000366 MiB name: RG_ring_5_874602 00:05:44.081 size: 0.125366 MiB name: RG_ring_2_874602 00:05:44.081 size: 0.015991 MiB name: RG_ring_3_874602 00:05:44.081 end memzones------- 00:05:44.081 13:36:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:44.081 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:44.081 list of free elements. size: 12.519348 MiB 00:05:44.081 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:44.081 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:44.081 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:44.081 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:44.081 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:44.081 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:44.081 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:44.081 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:44.081 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:44.081 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:44.081 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:44.081 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:44.081 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:44.081 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:44.081 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:44.081 list of standard malloc elements. 
size: 199.218079 MiB 00:05:44.081 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:44.081 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:44.081 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:44.081 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:44.081 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:44.081 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:44.081 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:44.081 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:44.081 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:44.081 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:44.081 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:44.081 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:44.081 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:44.082 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:44.082 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:44.082 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:44.082 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:44.082 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:44.082 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:44.082 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:44.082 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:44.082 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:44.082 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:44.082 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:44.082 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:44.082 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:44.082 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:44.082 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:44.082 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:44.082 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:44.082 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:44.082 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:44.082 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:44.082 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:44.082 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:44.082 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:44.082 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:44.082 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:44.082 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:44.082 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:44.082 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:44.082 list of memzone associated elements. 
size: 602.262573 MiB 00:05:44.082 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:44.082 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:44.082 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:44.082 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:44.082 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:44.082 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_874602_0 00:05:44.082 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:44.082 associated memzone info: size: 48.002930 MiB name: MP_evtpool_874602_0 00:05:44.082 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:44.082 associated memzone info: size: 48.002930 MiB name: MP_msgpool_874602_0 00:05:44.082 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:44.082 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:44.082 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:44.082 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:44.082 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:44.082 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_874602 00:05:44.082 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:44.082 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_874602 00:05:44.082 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:44.082 associated memzone info: size: 1.007996 MiB name: MP_evtpool_874602 00:05:44.082 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:44.082 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:44.082 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:44.082 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:44.082 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:44.082 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:44.082 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:44.082 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:44.082 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:44.082 associated memzone info: size: 1.000366 MiB name: RG_ring_0_874602 00:05:44.082 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:44.082 associated memzone info: size: 1.000366 MiB name: RG_ring_1_874602 00:05:44.082 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:44.082 associated memzone info: size: 1.000366 MiB name: RG_ring_4_874602 00:05:44.082 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:44.082 associated memzone info: size: 1.000366 MiB name: RG_ring_5_874602 00:05:44.082 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:44.082 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_874602 00:05:44.082 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:44.082 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:44.082 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:44.082 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:44.082 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:44.082 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:44.082 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:44.082 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_874602 00:05:44.082 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:44.082 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:44.082 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:44.082 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:44.082 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:44.082 associated memzone info: size: 0.015991 MiB name: RG_ring_3_874602 00:05:44.082 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:44.082 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:44.082 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:44.082 associated memzone info: size: 0.000183 MiB name: MP_msgpool_874602 00:05:44.082 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:44.082 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_874602 00:05:44.082 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:44.082 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:44.082 13:36:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:44.082 13:36:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 874602 00:05:44.082 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 874602 ']' 00:05:44.082 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 874602 00:05:44.082 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:44.082 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.082 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 874602 00:05:44.082 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.082 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.082 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 874602' 00:05:44.082 killing process with pid 874602 00:05:44.082 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 874602 00:05:44.082 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 874602 00:05:44.344 00:05:44.344 real 0m1.314s 00:05:44.344 user 0m1.392s 00:05:44.344 sys 0m0.378s 00:05:44.344 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.344 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.344 ************************************ 00:05:44.344 END TEST dpdk_mem_utility 00:05:44.344 ************************************ 00:05:44.344 13:36:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.344 13:36:10 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:44.344 13:36:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.344 13:36:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.344 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:05:44.344 ************************************ 00:05:44.344 START TEST event 00:05:44.344 ************************************ 00:05:44.344 13:36:10 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:44.605 * Looking for test storage... 
00:05:44.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:44.605 13:36:10 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:44.605 13:36:10 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:44.605 13:36:10 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.605 13:36:10 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:44.605 13:36:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.605 13:36:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.605 ************************************ 00:05:44.605 START TEST event_perf 00:05:44.605 ************************************ 00:05:44.605 13:36:11 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.605 Running I/O for 1 seconds...[2024-07-15 13:36:11.015721] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:44.605 [2024-07-15 13:36:11.015784] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875134 ] 00:05:44.605 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.605 [2024-07-15 13:36:11.078471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.866 [2024-07-15 13:36:11.152475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.866 [2024-07-15 13:36:11.152608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.866 [2024-07-15 13:36:11.152751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.866 Running I/O for 1 seconds...[2024-07-15 13:36:11.152751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.808 00:05:45.808 lcore 0: 175057 00:05:45.808 lcore 1: 175056 00:05:45.808 lcore 2: 175057 00:05:45.808 lcore 3: 175059 00:05:45.808 done. 00:05:45.808 00:05:45.808 real 0m1.201s 00:05:45.808 user 0m4.133s 00:05:45.808 sys 0m0.064s 00:05:45.808 13:36:12 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.808 13:36:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.808 ************************************ 00:05:45.808 END TEST event_perf 00:05:45.808 ************************************ 00:05:45.808 13:36:12 event -- common/autotest_common.sh@1142 -- # return 0 00:05:45.808 13:36:12 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:45.808 13:36:12 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:45.808 13:36:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.808 13:36:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.808 ************************************ 00:05:45.808 START TEST event_reactor 00:05:45.808 ************************************ 00:05:45.808 13:36:12 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:45.808 [2024-07-15 13:36:12.302573] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
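The event_perf run above boils down to a single one-second measurement across four reactors: -m 0xF puts a reactor on each of lcores 0-3 and -t 1 sets the duration. Each 'lcore N:' line is the event count for that reactor, roughly 175 k apiece or about 700 k events in total, and the ~4.1 s of user time against ~1.2 s of wall clock is what four busy-polling cores are expected to report. Invocation as run by the harness ($SPDK as before):

# -m 0xF -> reactors on lcores 0-3, -t 1 -> one-second run
$SPDK/test/event/event_perf/event_perf -m 0xF -t 1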
00:05:45.808 [2024-07-15 13:36:12.302641] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875570 ] 00:05:45.808 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.068 [2024-07-15 13:36:12.364306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.068 [2024-07-15 13:36:12.432006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.007 test_start 00:05:47.007 oneshot 00:05:47.007 tick 100 00:05:47.007 tick 100 00:05:47.007 tick 250 00:05:47.007 tick 100 00:05:47.007 tick 100 00:05:47.007 tick 100 00:05:47.007 tick 250 00:05:47.007 tick 500 00:05:47.007 tick 100 00:05:47.007 tick 100 00:05:47.007 tick 250 00:05:47.007 tick 100 00:05:47.007 tick 100 00:05:47.007 test_end 00:05:47.007 00:05:47.007 real 0m1.203s 00:05:47.007 user 0m1.127s 00:05:47.007 sys 0m0.072s 00:05:47.007 13:36:13 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.007 13:36:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:47.007 ************************************ 00:05:47.007 END TEST event_reactor 00:05:47.007 ************************************ 00:05:47.007 13:36:13 event -- common/autotest_common.sh@1142 -- # return 0 00:05:47.007 13:36:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:47.007 13:36:13 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:47.007 13:36:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.007 13:36:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.266 ************************************ 00:05:47.266 START TEST event_reactor_perf 00:05:47.266 ************************************ 00:05:47.266 13:36:13 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:47.266 [2024-07-15 13:36:13.584248] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
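By contrast, the event_reactor pass above and the event_reactor_perf pass that follows are single-core runs (the EAL arguments carry -c 0x1 and only reactor 0 starts); both take their duration from -t 1, which matches the roughly one-second wall-clock times they report. The oneshot and tick lines are the reactor test's own trace as its scheduled events fire; the headline number comes from reactor_perf below, which prints raw events per second on that lone reactor. Invocations as seen in the log ($SPDK as before):

# Both binaries accept -t <seconds>; the harness runs each for one second here.
$SPDK/test/event/reactor/reactor -t 1
$SPDK/test/event/reactor_perf/reactor_perf -t 1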
00:05:47.266 [2024-07-15 13:36:13.584342] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875918 ] 00:05:47.266 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.266 [2024-07-15 13:36:13.647844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.266 [2024-07-15 13:36:13.714070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.651 test_start 00:05:48.651 test_end 00:05:48.651 Performance: 369283 events per second 00:05:48.651 00:05:48.651 real 0m1.204s 00:05:48.651 user 0m1.130s 00:05:48.651 sys 0m0.070s 00:05:48.651 13:36:14 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.651 13:36:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.651 ************************************ 00:05:48.651 END TEST event_reactor_perf 00:05:48.651 ************************************ 00:05:48.651 13:36:14 event -- common/autotest_common.sh@1142 -- # return 0 00:05:48.651 13:36:14 event -- event/event.sh@49 -- # uname -s 00:05:48.651 13:36:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:48.651 13:36:14 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:48.651 13:36:14 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.651 13:36:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.651 13:36:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.651 ************************************ 00:05:48.651 START TEST event_scheduler 00:05:48.651 ************************************ 00:05:48.651 13:36:14 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:48.651 * Looking for test storage... 00:05:48.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:48.651 13:36:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:48.651 13:36:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=876175 00:05:48.651 13:36:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.651 13:36:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:48.651 13:36:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 876175 00:05:48.651 13:36:14 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 876175 ']' 00:05:48.651 13:36:14 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.651 13:36:14 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.651 13:36:14 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:48.651 13:36:14 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.651 13:36:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.651 [2024-07-15 13:36:14.979427] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:48.651 [2024-07-15 13:36:14.979503] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876175 ] 00:05:48.651 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.651 [2024-07-15 13:36:15.034654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.651 [2024-07-15 13:36:15.101743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.651 [2024-07-15 13:36:15.101900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.651 [2024-07-15 13:36:15.102058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.651 [2024-07-15 13:36:15.102060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.592 13:36:15 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.592 13:36:15 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:49.592 13:36:15 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:49.592 13:36:15 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.592 13:36:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.592 [2024-07-15 13:36:15.772173] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:49.592 [2024-07-15 13:36:15.772186] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:49.592 [2024-07-15 13:36:15.772194] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:49.592 [2024-07-15 13:36:15.772198] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:49.592 [2024-07-15 13:36:15.772201] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:49.592 13:36:15 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.592 13:36:15 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:49.592 13:36:15 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.592 13:36:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.592 [2024-07-15 13:36:15.826460] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
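The sequence just above is the --wait-for-rpc pattern: the scheduler test app is launched with framework initialization held back, the dynamic scheduler is selected over RPC, and only then is framework_start_init issued. The dpdk_governor *ERROR* about SMT siblings is followed by NOTICE lines showing the dynamic scheduler continuing without the governor, so the run proceeds. Reduced to the commands involved; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, shown here as plain rpc.py calls against the default /var/tmp/spdk.sock, with $SPDK as before:

$SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
# (the harness waits for the RPC socket to come up before issuing the calls below)
$SPDK/scripts/rpc.py framework_set_scheduler dynamic
$SPDK/scripts/rpc.py framework_start_init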
00:05:49.592 13:36:15 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.592 13:36:15 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:49.592 13:36:15 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.592 13:36:15 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.592 13:36:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.592 ************************************ 00:05:49.592 START TEST scheduler_create_thread 00:05:49.592 ************************************ 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.592 2 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.592 3 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.592 4 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.592 5 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.592 6 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.592 7 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.592 8 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.592 9 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.592 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.163 10 00:05:50.163 13:36:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.163 13:36:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:50.163 13:36:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.163 13:36:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.548 13:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.548 13:36:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:51.548 13:36:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:51.548 13:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.548 13:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.120 13:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.120 13:36:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:52.120 13:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.120 13:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.061 13:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.061 13:36:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:53.061 13:36:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:53.061 13:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.061 13:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.632 13:36:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.632 00:05:53.632 real 0m4.222s 00:05:53.632 user 0m0.028s 00:05:53.632 sys 0m0.003s 00:05:53.632 13:36:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.632 13:36:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.632 ************************************ 00:05:53.632 END TEST scheduler_create_thread 00:05:53.632 ************************************ 00:05:53.632 13:36:20 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:53.632 13:36:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:53.632 13:36:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 876175 00:05:53.632 13:36:20 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 876175 ']' 00:05:53.632 13:36:20 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 876175 00:05:53.632 13:36:20 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:53.632 13:36:20 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.632 13:36:20 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 876175 00:05:53.891 13:36:20 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:53.891 13:36:20 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:53.891 13:36:20 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 876175' 00:05:53.891 killing process with pid 876175 00:05:53.891 13:36:20 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 876175 00:05:53.891 13:36:20 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 876175 00:05:53.891 [2024-07-15 13:36:20.363658] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
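The scheduler_create_thread subtest that just finished drives everything through an RPC plugin (--plugin scheduler_plugin): it creates pinned threads with different activity levels, later raises one thread's activity and deletes another, and gives the dynamic scheduler a few seconds to rebalance, hence the ~4.2 s runtime. Representative calls, copied from the trace; judging by the thread names, -n is the thread name, -m the cpumask it is pinned to, and -a the percentage of time the thread keeps itself busy:

rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread id, new busy %
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12          # thread id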
00:05:54.152 00:05:54.152 real 0m5.704s 00:05:54.152 user 0m12.744s 00:05:54.152 sys 0m0.370s 00:05:54.152 13:36:20 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.152 13:36:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.152 ************************************ 00:05:54.152 END TEST event_scheduler 00:05:54.152 ************************************ 00:05:54.152 13:36:20 event -- common/autotest_common.sh@1142 -- # return 0 00:05:54.152 13:36:20 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:54.152 13:36:20 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:54.152 13:36:20 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.152 13:36:20 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.152 13:36:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.152 ************************************ 00:05:54.152 START TEST app_repeat 00:05:54.152 ************************************ 00:05:54.152 13:36:20 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:54.152 13:36:20 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.152 13:36:20 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.152 13:36:20 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:54.152 13:36:20 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.152 13:36:20 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:54.152 13:36:20 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:54.152 13:36:20 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:54.152 13:36:20 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:54.152 13:36:20 event.app_repeat -- event/event.sh@19 -- # repeat_pid=877368 00:05:54.152 13:36:20 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.152 13:36:20 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 877368' 00:05:54.152 Process app_repeat pid: 877368 00:05:54.152 13:36:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.152 13:36:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:54.152 spdk_app_start Round 0 00:05:54.152 13:36:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 877368 /var/tmp/spdk-nbd.sock 00:05:54.152 13:36:20 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 877368 ']' 00:05:54.152 13:36:20 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.152 13:36:20 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.152 13:36:20 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.152 13:36:20 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.152 13:36:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.152 [2024-07-15 13:36:20.636667] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
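The Round 0 prologue above launches the app_repeat binary and then blocks in waitforlisten until the RPC socket answers. A rough bash sketch of that pattern; the polling loop is an assumption about the helper's internals (the real waitforlisten lives in the shared autotest helpers), killprocess is the cleanup helper traced later in this log, and the retry bound mirrors max_retries=100 above.

sock=/var/tmp/spdk-nbd.sock
"$SPDK_DIR/test/event/app_repeat/app_repeat" -r "$sock" -m 0x3 -t 4 &
repeat_pid=$!
trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

# Poll until the app answers on its UNIX-domain RPC socket (bounded retries).
for ((i = 0; i < 100; i++)); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.5
done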
00:05:54.152 [2024-07-15 13:36:20.636713] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid877368 ] 00:05:54.152 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.412 [2024-07-15 13:36:20.693534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.412 [2024-07-15 13:36:20.758809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.412 [2024-07-15 13:36:20.758812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.412 13:36:20 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.412 13:36:20 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:54.412 13:36:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.672 Malloc0 00:05:54.672 13:36:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.672 Malloc1 00:05:54.672 13:36:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.672 13:36:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.672 13:36:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.672 13:36:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.672 13:36:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.933 13:36:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.933 13:36:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.933 13:36:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.933 13:36:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.933 13:36:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.933 13:36:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.933 13:36:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.933 13:36:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.933 13:36:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.933 13:36:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.933 13:36:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.933 /dev/nbd0 00:05:54.933 13:36:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.933 13:36:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.933 13:36:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:54.933 13:36:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:54.933 13:36:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:54.933 13:36:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:54.933 13:36:21 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:54.933 13:36:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:54.933 13:36:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:54.933 13:36:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:54.933 13:36:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.933 1+0 records in 00:05:54.933 1+0 records out 00:05:54.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 8.6661e-05 s, 47.3 MB/s 00:05:54.933 13:36:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.933 13:36:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:54.933 13:36:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.933 13:36:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:54.933 13:36:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:54.933 13:36:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.933 13:36:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.933 13:36:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.193 /dev/nbd1 00:05:55.193 13:36:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.193 13:36:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.193 13:36:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:55.193 13:36:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:55.193 13:36:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.193 13:36:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.193 13:36:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:55.193 13:36:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:55.193 13:36:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.193 13:36:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.193 13:36:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.193 1+0 records in 00:05:55.193 1+0 records out 00:05:55.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223678 s, 18.3 MB/s 00:05:55.193 13:36:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.193 13:36:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:55.193 13:36:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.193 13:36:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.193 13:36:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:55.193 13:36:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.193 13:36:21 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.193 13:36:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.193 13:36:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.193 13:36:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.455 { 00:05:55.455 "nbd_device": "/dev/nbd0", 00:05:55.455 "bdev_name": "Malloc0" 00:05:55.455 }, 00:05:55.455 { 00:05:55.455 "nbd_device": "/dev/nbd1", 00:05:55.455 "bdev_name": "Malloc1" 00:05:55.455 } 00:05:55.455 ]' 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.455 { 00:05:55.455 "nbd_device": "/dev/nbd0", 00:05:55.455 "bdev_name": "Malloc0" 00:05:55.455 }, 00:05:55.455 { 00:05:55.455 "nbd_device": "/dev/nbd1", 00:05:55.455 "bdev_name": "Malloc1" 00:05:55.455 } 00:05:55.455 ]' 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.455 /dev/nbd1' 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.455 /dev/nbd1' 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.455 256+0 records in 00:05:55.455 256+0 records out 00:05:55.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012473 s, 84.1 MB/s 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.455 256+0 records in 00:05:55.455 256+0 records out 00:05:55.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165006 s, 63.5 MB/s 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.455 256+0 records in 00:05:55.455 256+0 records out 00:05:55.455 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0166829 s, 62.9 MB/s 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.455 13:36:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.716 13:36:22 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.716 13:36:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.977 13:36:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.977 13:36:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.977 13:36:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.977 13:36:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.977 13:36:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.977 13:36:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.977 13:36:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.977 13:36:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.977 13:36:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.977 13:36:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.977 13:36:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.977 13:36:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.977 13:36:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.237 13:36:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.237 [2024-07-15 13:36:22.728679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.497 [2024-07-15 13:36:22.791178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.497 [2024-07-15 13:36:22.791196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.497 [2024-07-15 13:36:22.822608] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.497 [2024-07-15 13:36:22.822641] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.078 13:36:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.078 13:36:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:59.078 spdk_app_start Round 1 00:05:59.078 13:36:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 877368 /var/tmp/spdk-nbd.sock 00:05:59.078 13:36:25 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 877368 ']' 00:05:59.078 13:36:25 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.078 13:36:25 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.078 13:36:25 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
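The nbd_dd_data_verify steps traced above boil down to a write-then-compare cycle over both exported devices. A condensed sketch, with paths and sizes taken from the trace (256 blocks of 4096 bytes = 1 MiB):

tmp=$SPDK_DIR/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

# Write phase: fill a 1 MiB scratch file with random data and copy it to each nbd device.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
done

# Verify phase: compare the first 1 MiB of each device byte-for-byte with the scratch file.
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$dev"
done
rm "$tmp"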
00:05:59.078 13:36:25 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.078 13:36:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.339 13:36:25 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.339 13:36:25 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:59.339 13:36:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.599 Malloc0 00:05:59.599 13:36:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.599 Malloc1 00:05:59.599 13:36:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.599 13:36:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.599 13:36:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.599 13:36:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.599 13:36:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.599 13:36:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.599 13:36:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.599 13:36:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.599 13:36:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.599 13:36:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.599 13:36:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.599 13:36:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.599 13:36:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.599 13:36:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.599 13:36:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.599 13:36:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.859 /dev/nbd0 00:05:59.859 13:36:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.859 13:36:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.859 13:36:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:59.859 13:36:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:59.859 13:36:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:59.859 13:36:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:59.859 13:36:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:59.859 13:36:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:59.859 13:36:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:59.859 13:36:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:59.859 13:36:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:59.859 1+0 records in 00:05:59.859 1+0 records out 00:05:59.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210699 s, 19.4 MB/s 00:05:59.859 13:36:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.859 13:36:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:59.859 13:36:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.859 13:36:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:59.859 13:36:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:59.859 13:36:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.859 13:36:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.859 13:36:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.119 /dev/nbd1 00:06:00.119 13:36:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.119 13:36:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.119 13:36:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:00.119 13:36:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:00.119 13:36:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:00.119 13:36:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:00.119 13:36:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:00.119 13:36:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:00.119 13:36:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:00.119 13:36:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:00.119 13:36:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.119 1+0 records in 00:06:00.119 1+0 records out 00:06:00.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276464 s, 14.8 MB/s 00:06:00.119 13:36:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.119 13:36:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:00.119 13:36:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.119 13:36:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:00.119 13:36:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:00.119 13:36:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.119 13:36:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.119 13:36:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.119 13:36:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.119 13:36:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.119 13:36:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:00.119 { 00:06:00.119 "nbd_device": "/dev/nbd0", 00:06:00.119 "bdev_name": "Malloc0" 00:06:00.119 }, 00:06:00.119 { 00:06:00.119 "nbd_device": "/dev/nbd1", 00:06:00.119 "bdev_name": "Malloc1" 00:06:00.119 } 00:06:00.119 ]' 00:06:00.119 13:36:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.119 { 00:06:00.119 "nbd_device": "/dev/nbd0", 00:06:00.119 "bdev_name": "Malloc0" 00:06:00.119 }, 00:06:00.119 { 00:06:00.119 "nbd_device": "/dev/nbd1", 00:06:00.119 "bdev_name": "Malloc1" 00:06:00.119 } 00:06:00.119 ]' 00:06:00.119 13:36:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.119 13:36:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.119 /dev/nbd1' 00:06:00.119 13:36:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.380 /dev/nbd1' 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.380 256+0 records in 00:06:00.380 256+0 records out 00:06:00.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416827 s, 252 MB/s 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.380 256+0 records in 00:06:00.380 256+0 records out 00:06:00.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164038 s, 63.9 MB/s 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.380 256+0 records in 00:06:00.380 256+0 records out 00:06:00.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169867 s, 61.7 MB/s 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.380 13:36:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.640 13:36:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.640 13:36:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.640 13:36:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.640 13:36:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.640 13:36:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.640 13:36:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.640 13:36:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.640 13:36:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.640 13:36:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.640 13:36:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.640 13:36:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.899 13:36:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.899 13:36:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.899 13:36:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.899 13:36:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.899 13:36:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.899 13:36:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.899 13:36:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.899 13:36:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.899 13:36:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.899 13:36:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.899 13:36:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.899 13:36:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.899 13:36:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.159 13:36:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.159 [2024-07-15 13:36:27.565219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.159 [2024-07-15 13:36:27.628043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.159 [2024-07-15 13:36:27.628046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.159 [2024-07-15 13:36:27.660097] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.159 [2024-07-15 13:36:27.660135] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.457 13:36:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.457 13:36:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:04.457 spdk_app_start Round 2 00:06:04.457 13:36:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 877368 /var/tmp/spdk-nbd.sock 00:06:04.457 13:36:30 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 877368 ']' 00:06:04.457 13:36:30 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.457 13:36:30 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.457 13:36:30 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
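Each round above also runs nbd_get_count, which simply counts the /dev/nbdX entries that nbd_get_disks reports over the RPC socket. A small sketch of that check; the `|| true` guard is an addition so an empty list yields a count of 0 under `set -e` instead of aborting.

sock=/var/tmp/spdk-nbd.sock
json=$("$SPDK_DIR/scripts/rpc.py" -s "$sock" nbd_get_disks)

# Pull out the device names and count how many look like /dev/nbd*.
names=$(echo "$json" | jq -r '.[] | .nbd_device')
count=$(echo "$names" | grep -c /dev/nbd || true)

# Two devices are expected while Malloc0/Malloc1 are exported, zero after nbd_stop_disk.
echo "exported nbd devices: $count"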
00:06:04.457 13:36:30 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.457 13:36:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.457 13:36:30 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.457 13:36:30 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:04.457 13:36:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.457 Malloc0 00:06:04.457 13:36:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.457 Malloc1 00:06:04.457 13:36:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.457 13:36:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.457 13:36:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.457 13:36:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.457 13:36:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.457 13:36:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.457 13:36:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.457 13:36:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.457 13:36:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.457 13:36:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.457 13:36:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.457 13:36:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.457 13:36:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.457 13:36:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.457 13:36:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.457 13:36:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.717 /dev/nbd0 00:06:04.717 13:36:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.717 13:36:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.717 13:36:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:04.717 13:36:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:04.717 13:36:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:04.717 13:36:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:04.717 13:36:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:04.717 13:36:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:04.717 13:36:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:04.717 13:36:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:04.717 13:36:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:04.717 1+0 records in 00:06:04.717 1+0 records out 00:06:04.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021134 s, 19.4 MB/s 00:06:04.717 13:36:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.717 13:36:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:04.717 13:36:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.717 13:36:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:04.717 13:36:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:04.717 13:36:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.717 13:36:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.717 13:36:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.717 /dev/nbd1 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.978 13:36:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:04.978 13:36:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:04.978 13:36:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:04.978 13:36:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:04.978 13:36:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:04.978 13:36:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:04.978 13:36:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:04.978 13:36:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:04.978 13:36:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.978 1+0 records in 00:06:04.978 1+0 records out 00:06:04.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286171 s, 14.3 MB/s 00:06:04.978 13:36:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.978 13:36:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:04.978 13:36:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.978 13:36:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:04.978 13:36:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:04.978 { 00:06:04.978 "nbd_device": "/dev/nbd0", 00:06:04.978 "bdev_name": "Malloc0" 00:06:04.978 }, 00:06:04.978 { 00:06:04.978 "nbd_device": "/dev/nbd1", 00:06:04.978 "bdev_name": "Malloc1" 00:06:04.978 } 00:06:04.978 ]' 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.978 { 00:06:04.978 "nbd_device": "/dev/nbd0", 00:06:04.978 "bdev_name": "Malloc0" 00:06:04.978 }, 00:06:04.978 { 00:06:04.978 "nbd_device": "/dev/nbd1", 00:06:04.978 "bdev_name": "Malloc1" 00:06:04.978 } 00:06:04.978 ]' 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.978 /dev/nbd1' 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.978 /dev/nbd1' 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.978 13:36:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.978 256+0 records in 00:06:04.978 256+0 records out 00:06:04.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012429 s, 84.4 MB/s 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.239 256+0 records in 00:06:05.239 256+0 records out 00:06:05.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161907 s, 64.8 MB/s 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.239 256+0 records in 00:06:05.239 256+0 records out 00:06:05.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171653 s, 61.1 MB/s 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.239 13:36:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.500 13:36:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.500 13:36:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.500 13:36:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.500 13:36:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.500 13:36:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.500 13:36:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.500 13:36:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.500 13:36:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.500 13:36:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.500 13:36:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.500 13:36:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.760 13:36:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.760 13:36:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.760 13:36:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.760 13:36:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.760 13:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.760 13:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.760 13:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.760 13:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.760 13:36:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.760 13:36:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.761 13:36:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.761 13:36:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.761 13:36:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.021 13:36:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.021 [2024-07-15 13:36:32.413118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.021 [2024-07-15 13:36:32.475813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.021 [2024-07-15 13:36:32.475815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.021 [2024-07-15 13:36:32.507276] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.021 [2024-07-15 13:36:32.507312] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.320 13:36:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 877368 /var/tmp/spdk-nbd.sock 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 877368 ']' 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
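The waitfornbd / waitfornbd_exit helpers traced throughout these rounds poll /proc/partitions until the device appears (then prove it is readable with one direct 4 KiB read) or disappears again. A sketch assuming a 20-try loop with a short delay, matching the (( i <= 20 )) counters above; the scratch path is simplified here, the trace reads into test/event/nbdtest under the workspace.

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Prove the device actually serves data: one direct 4 KiB read, non-empty result.
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    local size
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1
    done
}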
00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:09.320 13:36:35 event.app_repeat -- event/event.sh@39 -- # killprocess 877368 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 877368 ']' 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 877368 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 877368 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 877368' 00:06:09.320 killing process with pid 877368 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@967 -- # kill 877368 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@972 -- # wait 877368 00:06:09.320 spdk_app_start is called in Round 0. 00:06:09.320 Shutdown signal received, stop current app iteration 00:06:09.320 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:06:09.320 spdk_app_start is called in Round 1. 00:06:09.320 Shutdown signal received, stop current app iteration 00:06:09.320 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:06:09.320 spdk_app_start is called in Round 2. 00:06:09.320 Shutdown signal received, stop current app iteration 00:06:09.320 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:06:09.320 spdk_app_start is called in Round 3. 
00:06:09.320 Shutdown signal received, stop current app iteration 00:06:09.320 13:36:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:09.320 13:36:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:09.320 00:06:09.320 real 0m15.008s 00:06:09.320 user 0m32.378s 00:06:09.320 sys 0m2.080s 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.320 13:36:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.320 ************************************ 00:06:09.320 END TEST app_repeat 00:06:09.320 ************************************ 00:06:09.320 13:36:35 event -- common/autotest_common.sh@1142 -- # return 0 00:06:09.320 13:36:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:09.320 13:36:35 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:09.320 13:36:35 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.320 13:36:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.320 13:36:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.320 ************************************ 00:06:09.320 START TEST cpu_locks 00:06:09.320 ************************************ 00:06:09.320 13:36:35 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:09.320 * Looking for test storage... 00:06:09.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:09.320 13:36:35 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:09.320 13:36:35 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:09.320 13:36:35 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:09.320 13:36:35 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:09.320 13:36:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.320 13:36:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.320 13:36:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.320 ************************************ 00:06:09.320 START TEST default_locks 00:06:09.320 ************************************ 00:06:09.320 13:36:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:09.320 13:36:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=880615 00:06:09.320 13:36:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 880615 00:06:09.320 13:36:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.320 13:36:35 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 880615 ']' 00:06:09.320 13:36:35 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.320 13:36:35 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.320 13:36:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
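
The default_locks run that starts above checks that an spdk_tgt launched with -m 0x1 really holds a file lock on its core. The locks_exist helper traced below amounts to the following sketch; the pgrep lookup is only an illustrative assumption, the suite itself already tracks the pid.

  # pid of the target under test (the suite keeps this as spdk_tgt_pid).
  pid=$(pgrep -f spdk_tgt | head -n1)
  # A claimed core shows up as a POSIX lock on /var/tmp/spdk_cpu_lock_*.
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "pid $pid holds its CPU core lock"
  fi
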
00:06:09.320 13:36:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.320 13:36:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.581 [2024-07-15 13:36:35.904907] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:09.581 [2024-07-15 13:36:35.904959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880615 ] 00:06:09.581 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.581 [2024-07-15 13:36:35.965341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.581 [2024-07-15 13:36:36.034044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.154 13:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.154 13:36:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:10.154 13:36:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 880615 00:06:10.154 13:36:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 880615 00:06:10.154 13:36:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.726 lslocks: write error 00:06:10.726 13:36:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 880615 00:06:10.726 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 880615 ']' 00:06:10.726 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 880615 00:06:10.726 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:10.726 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.726 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 880615 00:06:10.726 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.726 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.726 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 880615' 00:06:10.726 killing process with pid 880615 00:06:10.726 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 880615 00:06:10.726 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 880615 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 880615 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 880615 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 880615 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 880615 ']' 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (880615) - No such process 00:06:10.987 ERROR: process (pid: 880615) is no longer running 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.987 00:06:10.987 real 0m1.542s 00:06:10.987 user 0m1.636s 00:06:10.987 sys 0m0.498s 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.987 13:36:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.987 ************************************ 00:06:10.987 END TEST default_locks 00:06:10.987 ************************************ 00:06:10.987 13:36:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:10.987 13:36:37 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:10.987 13:36:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.987 13:36:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.987 13:36:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.987 ************************************ 00:06:10.987 START TEST default_locks_via_rpc 00:06:10.987 ************************************ 00:06:10.987 13:36:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:10.987 13:36:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=880977 00:06:10.987 13:36:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 880977 00:06:10.987 13:36:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.987 13:36:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 880977 ']' 00:06:10.987 13:36:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.987 13:36:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.987 13:36:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.988 13:36:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.988 13:36:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.248 [2024-07-15 13:36:37.513075] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:11.248 [2024-07-15 13:36:37.513141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880977 ] 00:06:11.248 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.248 [2024-07-15 13:36:37.571199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.248 [2024-07-15 13:36:37.635931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 880977 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 880977 00:06:11.820 13:36:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.394 13:36:38 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 880977 00:06:12.394 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 880977 ']' 00:06:12.394 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 880977 00:06:12.394 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:12.394 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.394 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 880977 00:06:12.394 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.394 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.394 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 880977' 00:06:12.394 killing process with pid 880977 00:06:12.394 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 880977 00:06:12.394 13:36:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 880977 00:06:12.655 00:06:12.655 real 0m1.556s 00:06:12.655 user 0m1.664s 00:06:12.655 sys 0m0.489s 00:06:12.655 13:36:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.655 13:36:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.655 ************************************ 00:06:12.655 END TEST default_locks_via_rpc 00:06:12.655 ************************************ 00:06:12.655 13:36:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:12.655 13:36:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:12.655 13:36:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.655 13:36:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.655 13:36:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.655 ************************************ 00:06:12.655 START TEST non_locking_app_on_locked_coremask 00:06:12.655 ************************************ 00:06:12.655 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:12.655 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=881348 00:06:12.655 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 881348 /var/tmp/spdk.sock 00:06:12.655 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 881348 ']' 00:06:12.655 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.655 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.655 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:12.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.655 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.655 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.655 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.655 [2024-07-15 13:36:39.134696] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:12.656 [2024-07-15 13:36:39.134747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881348 ] 00:06:12.656 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.915 [2024-07-15 13:36:39.195915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.915 [2024-07-15 13:36:39.266337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.485 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.485 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:13.486 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=881388 00:06:13.486 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 881388 /var/tmp/spdk2.sock 00:06:13.486 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 881388 ']' 00:06:13.486 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.486 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.486 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.486 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.486 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.486 13:36:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:13.486 [2024-07-15 13:36:39.944468] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:13.486 [2024-07-15 13:36:39.944522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881388 ] 00:06:13.486 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.751 [2024-07-15 13:36:40.035058] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
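
non_locking_app_on_locked_coremask, whose setup is traced above, shows that a second target can share core 0 with a lock-holding one as long as the newcomer passes --disable-cpumask-locks and uses its own RPC socket. Roughly the following, with backgrounding and ordering simplified; the suite waits for each RPC socket before continuing.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # First target claims core 0 and its lock file, default RPC socket.
  "$SPDK"/build/bin/spdk_tgt -m 0x1 &
  # Second target reuses core 0 only because lock enforcement is off,
  # and listens on a second socket so both can be driven independently.
  "$SPDK"/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
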
00:06:13.751 [2024-07-15 13:36:40.035092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.751 [2024-07-15 13:36:40.165169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.332 13:36:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.332 13:36:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:14.332 13:36:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 881348 00:06:14.332 13:36:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.332 13:36:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 881348 00:06:14.903 lslocks: write error 00:06:14.903 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 881348 00:06:14.903 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 881348 ']' 00:06:14.903 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 881348 00:06:14.903 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:14.903 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.903 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 881348 00:06:14.903 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.903 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.903 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 881348' 00:06:14.903 killing process with pid 881348 00:06:14.903 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 881348 00:06:14.903 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 881348 00:06:15.167 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 881388 00:06:15.167 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 881388 ']' 00:06:15.167 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 881388 00:06:15.167 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:15.167 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.167 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 881388 00:06:15.167 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.167 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.167 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 881388' 00:06:15.167 killing 
process with pid 881388 00:06:15.167 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 881388 00:06:15.167 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 881388 00:06:15.430 00:06:15.430 real 0m2.813s 00:06:15.430 user 0m3.069s 00:06:15.430 sys 0m0.837s 00:06:15.430 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.430 13:36:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.430 ************************************ 00:06:15.430 END TEST non_locking_app_on_locked_coremask 00:06:15.430 ************************************ 00:06:15.430 13:36:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:15.430 13:36:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:15.430 13:36:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.430 13:36:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.430 13:36:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.691 ************************************ 00:06:15.691 START TEST locking_app_on_unlocked_coremask 00:06:15.691 ************************************ 00:06:15.691 13:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:15.691 13:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=881961 00:06:15.691 13:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 881961 /var/tmp/spdk.sock 00:06:15.691 13:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:15.691 13:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 881961 ']' 00:06:15.691 13:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.691 13:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.691 13:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.691 13:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.691 13:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.691 [2024-07-15 13:36:42.026607] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:15.691 [2024-07-15 13:36:42.026667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881961 ] 00:06:15.691 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.691 [2024-07-15 13:36:42.090958] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:15.691 [2024-07-15 13:36:42.090998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.691 [2024-07-15 13:36:42.162118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.633 13:36:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.633 13:36:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:16.633 13:36:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.633 13:36:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=882068 00:06:16.633 13:36:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 882068 /var/tmp/spdk2.sock 00:06:16.633 13:36:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 882068 ']' 00:06:16.633 13:36:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.633 13:36:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.633 13:36:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.633 13:36:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.633 13:36:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.633 [2024-07-15 13:36:42.852966] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
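
Each launch above is followed by a waitforlisten call with max_retries=100 before any RPC is sent. The helper's internals are not shown in the trace; a crude stand-in that simply polls for the UNIX domain socket might look like this (an assumption, not the suite's actual implementation).

  sock=/var/tmp/spdk2.sock
  for i in $(seq 1 100); do          # mirrors max_retries=100 from the trace
      [ -S "$sock" ] && break        # stop once the socket exists
      sleep 0.1
  done
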
00:06:16.633 [2024-07-15 13:36:42.853015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid882068 ] 00:06:16.633 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.633 [2024-07-15 13:36:42.942242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.633 [2024-07-15 13:36:43.075701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.216 13:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.216 13:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:17.216 13:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 882068 00:06:17.216 13:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 882068 00:06:17.216 13:36:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.850 lslocks: write error 00:06:17.850 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 881961 00:06:17.850 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 881961 ']' 00:06:17.850 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 881961 00:06:17.850 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:17.850 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.850 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 881961 00:06:17.850 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.850 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.850 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 881961' 00:06:17.850 killing process with pid 881961 00:06:17.850 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 881961 00:06:17.850 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 881961 00:06:18.119 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 882068 00:06:18.119 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 882068 ']' 00:06:18.119 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 882068 00:06:18.119 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:18.119 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.119 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 882068 00:06:18.119 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:18.119 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.119 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 882068' 00:06:18.119 killing process with pid 882068 00:06:18.119 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 882068 00:06:18.119 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 882068 00:06:18.380 00:06:18.380 real 0m2.869s 00:06:18.380 user 0m3.137s 00:06:18.380 sys 0m0.849s 00:06:18.380 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.380 13:36:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.380 ************************************ 00:06:18.380 END TEST locking_app_on_unlocked_coremask 00:06:18.380 ************************************ 00:06:18.380 13:36:44 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:18.380 13:36:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:18.380 13:36:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.380 13:36:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.380 13:36:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.641 ************************************ 00:06:18.641 START TEST locking_app_on_locked_coremask 00:06:18.641 ************************************ 00:06:18.641 13:36:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:18.641 13:36:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=882462 00:06:18.641 13:36:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 882462 /var/tmp/spdk.sock 00:06:18.641 13:36:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.641 13:36:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 882462 ']' 00:06:18.641 13:36:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.641 13:36:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.641 13:36:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.641 13:36:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.641 13:36:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.641 [2024-07-15 13:36:44.965171] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:18.641 [2024-07-15 13:36:44.965223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid882462 ] 00:06:18.641 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.641 [2024-07-15 13:36:45.024183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.641 [2024-07-15 13:36:45.091473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.212 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.212 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:19.212 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=882776 00:06:19.212 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 882776 /var/tmp/spdk2.sock 00:06:19.212 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:19.212 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.212 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 882776 /var/tmp/spdk2.sock 00:06:19.212 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:19.212 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.213 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:19.213 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.213 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 882776 /var/tmp/spdk2.sock 00:06:19.213 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 882776 ']' 00:06:19.213 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.213 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.213 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.213 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.213 13:36:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.473 [2024-07-15 13:36:45.752371] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:19.473 [2024-07-15 13:36:45.752422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid882776 ] 00:06:19.473 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.473 [2024-07-15 13:36:45.838234] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 882462 has claimed it. 00:06:19.473 [2024-07-15 13:36:45.838276] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (882776) - No such process 00:06:20.045 ERROR: process (pid: 882776) is no longer running 00:06:20.045 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.045 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:20.045 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:20.045 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.045 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:20.045 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.045 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 882462 00:06:20.045 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 882462 00:06:20.045 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.305 lslocks: write error 00:06:20.305 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 882462 00:06:20.306 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 882462 ']' 00:06:20.306 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 882462 00:06:20.306 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:20.306 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.306 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 882462 00:06:20.306 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.306 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.306 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 882462' 00:06:20.306 killing process with pid 882462 00:06:20.306 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 882462 00:06:20.306 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 882462 00:06:20.567 00:06:20.567 real 0m2.032s 00:06:20.567 user 0m2.259s 00:06:20.567 sys 0m0.536s 00:06:20.567 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.567 13:36:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.567 ************************************ 00:06:20.567 END TEST locking_app_on_locked_coremask 00:06:20.567 ************************************ 00:06:20.567 13:36:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:20.567 13:36:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:20.567 13:36:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.567 13:36:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.567 13:36:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.567 ************************************ 00:06:20.567 START TEST locking_overlapped_coremask 00:06:20.567 ************************************ 00:06:20.567 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:20.567 13:36:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=883035 00:06:20.567 13:36:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 883035 /var/tmp/spdk.sock 00:06:20.567 13:36:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:20.567 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 883035 ']' 00:06:20.567 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.567 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.567 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.567 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.567 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.567 [2024-07-15 13:36:47.068817] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
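
locking_app_on_locked_coremask, which finishes above, is the negative case: with process 882462 already holding core 0, the second spdk_tgt on the same mask logs 'Cannot create lock on core 0' and exits, and the test passes only because that startup fails. In the trace the NOT wrapper is applied to waitforlisten on the second socket; condensed, the expectation is roughly:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # A second instance on an already locked core 0 must fail to start.
  if "$SPDK"/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
      echo "unexpected: second instance acquired core 0" >&2
      exit 1
  fi
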
00:06:20.567 [2024-07-15 13:36:47.068873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883035 ] 00:06:20.827 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.827 [2024-07-15 13:36:47.130243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.827 [2024-07-15 13:36:47.204179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.827 [2024-07-15 13:36:47.204422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.827 [2024-07-15 13:36:47.204425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=883155 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 883155 /var/tmp/spdk2.sock 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 883155 /var/tmp/spdk2.sock 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 883155 /var/tmp/spdk2.sock 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 883155 ']' 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.398 13:36:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.398 [2024-07-15 13:36:47.886894] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:21.398 [2024-07-15 13:36:47.886946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883155 ] 00:06:21.398 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.659 [2024-07-15 13:36:47.956660] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 883035 has claimed it. 00:06:21.659 [2024-07-15 13:36:47.956692] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:22.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (883155) - No such process 00:06:22.231 ERROR: process (pid: 883155) is no longer running 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 883035 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 883035 ']' 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 883035 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 883035 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 883035' 00:06:22.231 killing process with pid 883035 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 883035 00:06:22.231 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 883035 00:06:22.492 00:06:22.492 real 0m1.757s 00:06:22.492 user 0m4.964s 00:06:22.492 sys 0m0.369s 00:06:22.492 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.492 13:36:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.492 ************************************ 00:06:22.492 END TEST locking_overlapped_coremask 00:06:22.492 ************************************ 00:06:22.492 13:36:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:22.492 13:36:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:22.492 13:36:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.492 13:36:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.492 13:36:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.492 ************************************ 00:06:22.492 START TEST locking_overlapped_coremask_via_rpc 00:06:22.492 ************************************ 00:06:22.492 13:36:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:22.492 13:36:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=883492 00:06:22.492 13:36:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 883492 /var/tmp/spdk.sock 00:06:22.492 13:36:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:22.492 13:36:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 883492 ']' 00:06:22.492 13:36:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.492 13:36:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.492 13:36:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.492 13:36:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.492 13:36:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.492 [2024-07-15 13:36:48.901548] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:22.492 [2024-07-15 13:36:48.901602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883492 ] 00:06:22.492 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.492 [2024-07-15 13:36:48.962949] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
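
After the overlapped-coremask failure, check_remaining_locks (traced above, while pid 883035 was still running) confirms that exactly the lock files for cores 0-2 of the surviving -m 0x7 target are present. The comparison is a plain bash glob checked against a brace expansion:

  # Lock files on disk versus the ones core mask 0x7 (cores 0-2) should own.
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
      echo "lock files match cores 0-2"
  fi
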
00:06:22.492 [2024-07-15 13:36:48.962982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.753 [2024-07-15 13:36:49.037553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.753 [2024-07-15 13:36:49.037668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.753 [2024-07-15 13:36:49.037672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.325 13:36:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.325 13:36:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:23.325 13:36:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=883525 00:06:23.325 13:36:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 883525 /var/tmp/spdk2.sock 00:06:23.325 13:36:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 883525 ']' 00:06:23.325 13:36:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:23.325 13:36:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.325 13:36:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.325 13:36:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.325 13:36:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.325 13:36:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.325 [2024-07-15 13:36:49.730299] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:23.325 [2024-07-15 13:36:49.730351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883525 ] 00:06:23.325 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.325 [2024-07-15 13:36:49.802459] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:23.325 [2024-07-15 13:36:49.802482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.586 [2024-07-15 13:36:49.912482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.586 [2024-07-15 13:36:49.912640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.586 [2024-07-15 13:36:49.912642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.156 [2024-07-15 13:36:50.506184] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 883492 has claimed it. 
00:06:24.156 request: 00:06:24.156 { 00:06:24.156 "method": "framework_enable_cpumask_locks", 00:06:24.156 "req_id": 1 00:06:24.156 } 00:06:24.156 Got JSON-RPC error response 00:06:24.156 response: 00:06:24.156 { 00:06:24.156 "code": -32603, 00:06:24.156 "message": "Failed to claim CPU core: 2" 00:06:24.156 } 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 883492 /var/tmp/spdk.sock 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 883492 ']' 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.156 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.416 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.416 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:24.416 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 883525 /var/tmp/spdk2.sock 00:06:24.416 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 883525 ']' 00:06:24.416 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.416 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.416 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
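
The -32603 response above is the expected result: the first target (pid 883492, core mask 0x7, cores 0-2) has already taken the core locks via framework_enable_cpumask_locks, so the second target (mask 0x1c, cores 2-4) cannot claim the shared core 2. A by-hand reproduction would look roughly like the sketch below; the paths assume the SPDK source tree, and the rpc.py subcommand is assumed to carry the same name as the RPC method shown in the request above.

  # start both targets with core locks disabled; the masks overlap on core 2
  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  # (wait for both RPC sockets to come up before issuing the calls below)

  # locking the first target succeeds and creates /var/tmp/spdk_cpu_lock_000..002
  ./scripts/rpc.py framework_enable_cpumask_locks

  # core 2 is already locked, so the same call against the second target fails
  # with -32603 "Failed to claim CPU core: 2", as captured above
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
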
00:06:24.416 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.416 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.416 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.416 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:24.416 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:24.416 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.416 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.416 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.416 00:06:24.416 real 0m2.012s 00:06:24.416 user 0m0.750s 00:06:24.416 sys 0m0.183s 00:06:24.417 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.417 13:36:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.417 ************************************ 00:06:24.417 END TEST locking_overlapped_coremask_via_rpc 00:06:24.417 ************************************ 00:06:24.417 13:36:50 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:24.417 13:36:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:24.417 13:36:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 883492 ]] 00:06:24.417 13:36:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 883492 00:06:24.417 13:36:50 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 883492 ']' 00:06:24.417 13:36:50 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 883492 00:06:24.417 13:36:50 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:24.417 13:36:50 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.417 13:36:50 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 883492 00:06:24.676 13:36:50 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.676 13:36:50 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.676 13:36:50 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 883492' 00:06:24.676 killing process with pid 883492 00:06:24.676 13:36:50 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 883492 00:06:24.676 13:36:50 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 883492 00:06:24.676 13:36:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 883525 ]] 00:06:24.676 13:36:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 883525 00:06:24.676 13:36:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 883525 ']' 00:06:24.676 13:36:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 883525 00:06:24.676 13:36:51 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
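
The check_remaining_locks call traced a few entries above is the final assertion of this test: only the first target's lock files may remain. Reconstructed from the xtrace (cpu_locks.sh lines 36-38 as echoed there), the helper amounts to the three lines below; the expected set {000..002} corresponds to cores 0-2 of the 0x7 mask, while the second target never created any locks.

  locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually present
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # one file per core of the 0x7 target
  [[ ${locks[*]} == "${locks_expected[*]}" ]]         # any missing or extra lock file fails the test
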
00:06:24.676 13:36:51 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.676 13:36:51 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 883525 00:06:24.936 13:36:51 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:24.936 13:36:51 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:24.936 13:36:51 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 883525' 00:06:24.936 killing process with pid 883525 00:06:24.936 13:36:51 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 883525 00:06:24.936 13:36:51 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 883525 00:06:24.936 13:36:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.936 13:36:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:24.936 13:36:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 883492 ]] 00:06:24.936 13:36:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 883492 00:06:24.936 13:36:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 883492 ']' 00:06:24.936 13:36:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 883492 00:06:24.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (883492) - No such process 00:06:24.936 13:36:51 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 883492 is not found' 00:06:24.936 Process with pid 883492 is not found 00:06:24.936 13:36:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 883525 ]] 00:06:24.936 13:36:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 883525 00:06:24.936 13:36:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 883525 ']' 00:06:24.936 13:36:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 883525 00:06:24.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (883525) - No such process 00:06:24.936 13:36:51 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 883525 is not found' 00:06:24.936 Process with pid 883525 is not found 00:06:24.936 13:36:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.936 00:06:24.936 real 0m15.711s 00:06:24.936 user 0m27.018s 00:06:24.936 sys 0m4.627s 00:06:24.936 13:36:51 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.936 13:36:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.936 ************************************ 00:06:24.936 END TEST cpu_locks 00:06:24.936 ************************************ 00:06:24.936 13:36:51 event -- common/autotest_common.sh@1142 -- # return 0 00:06:24.936 00:06:24.936 real 0m40.586s 00:06:24.936 user 1m18.735s 00:06:24.936 sys 0m7.652s 00:06:24.936 13:36:51 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.936 13:36:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.936 ************************************ 00:06:24.936 END TEST event 00:06:24.936 ************************************ 00:06:25.223 13:36:51 -- common/autotest_common.sh@1142 -- # return 0 00:06:25.223 13:36:51 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:25.223 13:36:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.223 13:36:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.223 13:36:51 -- 
common/autotest_common.sh@10 -- # set +x 00:06:25.223 ************************************ 00:06:25.223 START TEST thread 00:06:25.223 ************************************ 00:06:25.223 13:36:51 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:25.223 * Looking for test storage... 00:06:25.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:25.223 13:36:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:25.223 13:36:51 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:25.223 13:36:51 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.223 13:36:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.223 ************************************ 00:06:25.223 START TEST thread_poller_perf 00:06:25.223 ************************************ 00:06:25.223 13:36:51 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:25.223 [2024-07-15 13:36:51.677135] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:25.223 [2024-07-15 13:36:51.677231] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883980 ] 00:06:25.223 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.223 [2024-07-15 13:36:51.744557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.482 [2024-07-15 13:36:51.822990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.482 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:26.423 ====================================== 00:06:26.423 busy:2409495608 (cyc) 00:06:26.423 total_run_count: 287000 00:06:26.423 tsc_hz: 2400000000 (cyc) 00:06:26.423 ====================================== 00:06:26.423 poller_cost: 8395 (cyc), 3497 (nsec) 00:06:26.423 00:06:26.423 real 0m1.228s 00:06:26.423 user 0m1.143s 00:06:26.423 sys 0m0.081s 00:06:26.423 13:36:52 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.423 13:36:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.423 ************************************ 00:06:26.423 END TEST thread_poller_perf 00:06:26.423 ************************************ 00:06:26.423 13:36:52 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:26.423 13:36:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.423 13:36:52 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:26.423 13:36:52 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.423 13:36:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.423 ************************************ 00:06:26.423 START TEST thread_poller_perf 00:06:26.423 ************************************ 00:06:26.423 13:36:52 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.684 [2024-07-15 13:36:52.954153] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:26.684 [2024-07-15 13:36:52.954256] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884321 ] 00:06:26.684 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.684 [2024-07-15 13:36:53.017055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.684 [2024-07-15 13:36:53.082272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.684 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:27.624 ====================================== 00:06:27.624 busy:2402007576 (cyc) 00:06:27.624 total_run_count: 3806000 00:06:27.624 tsc_hz: 2400000000 (cyc) 00:06:27.624 ====================================== 00:06:27.624 poller_cost: 631 (cyc), 262 (nsec) 00:06:27.624 00:06:27.624 real 0m1.204s 00:06:27.624 user 0m1.132s 00:06:27.624 sys 0m0.068s 00:06:27.624 13:36:54 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.624 13:36:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.624 ************************************ 00:06:27.624 END TEST thread_poller_perf 00:06:27.624 ************************************ 00:06:27.885 13:36:54 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:27.885 13:36:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:27.885 00:06:27.885 real 0m2.654s 00:06:27.885 user 0m2.350s 00:06:27.885 sys 0m0.310s 00:06:27.885 13:36:54 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.885 13:36:54 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.885 ************************************ 00:06:27.885 END TEST thread 00:06:27.885 ************************************ 00:06:27.885 13:36:54 -- common/autotest_common.sh@1142 -- # return 0 00:06:27.885 13:36:54 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:27.885 13:36:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.885 13:36:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.885 13:36:54 -- common/autotest_common.sh@10 -- # set +x 00:06:27.886 ************************************ 00:06:27.886 START TEST accel 00:06:27.886 ************************************ 00:06:27.886 13:36:54 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:27.886 * Looking for test storage... 00:06:27.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:27.886 13:36:54 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:27.886 13:36:54 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:27.886 13:36:54 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:27.886 13:36:54 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=884711 00:06:27.886 13:36:54 accel -- accel/accel.sh@63 -- # waitforlisten 884711 00:06:27.886 13:36:54 accel -- common/autotest_common.sh@829 -- # '[' -z 884711 ']' 00:06:27.886 13:36:54 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.886 13:36:54 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.886 13:36:54 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
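
The poller_cost figures printed in the two result blocks above are derived values: cycles per poll is the busy cycle count divided by total_run_count, and the nanosecond figure rescales that by tsc_hz (2.4 GHz on this node); both printed values round down. Checking the arithmetic against the output:

  -l 1 run: 2409495608 cyc / 287000 polls  = 8395 cyc per poll;  8395 / 2.4 = 3497 ns
  -l 0 run: 2402007576 cyc / 3806000 polls =  631 cyc per poll;   631 / 2.4 =  262 ns

With no timer period (-l 0) the same one-second window completes over thirteen times as many polls at a far lower measured cost per poll.
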
00:06:27.886 13:36:54 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:27.886 13:36:54 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.886 13:36:54 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:27.886 13:36:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.886 13:36:54 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.886 13:36:54 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.886 13:36:54 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.886 13:36:54 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.886 13:36:54 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.886 13:36:54 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:27.886 13:36:54 accel -- accel/accel.sh@41 -- # jq -r . 00:06:27.886 [2024-07-15 13:36:54.406029] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:27.886 [2024-07-15 13:36:54.406104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884711 ] 00:06:28.147 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.147 [2024-07-15 13:36:54.471961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.147 [2024-07-15 13:36:54.545496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.719 13:36:55 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.719 13:36:55 accel -- common/autotest_common.sh@862 -- # return 0 00:06:28.719 13:36:55 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:28.719 13:36:55 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:28.719 13:36:55 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:28.719 13:36:55 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:28.719 13:36:55 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:28.719 13:36:55 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:28.719 13:36:55 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:28.719 13:36:55 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.719 13:36:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.719 13:36:55 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.719 13:36:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.719 13:36:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.719 13:36:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.719 13:36:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.719 13:36:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.719 13:36:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.719 13:36:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.719 13:36:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.719 13:36:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.719 13:36:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.719 13:36:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.719 13:36:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.719 13:36:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.719 13:36:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.719 13:36:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.719 13:36:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.719 13:36:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.719 13:36:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.719 13:36:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.719 13:36:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.719 13:36:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # IFS== 
00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.719 13:36:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.719 13:36:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.719 13:36:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.719 13:36:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.719 13:36:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.719 13:36:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.719 13:36:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.719 13:36:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.719 13:36:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.719 13:36:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.719 13:36:55 accel -- accel/accel.sh@75 -- # killprocess 884711 00:06:28.719 13:36:55 accel -- common/autotest_common.sh@948 -- # '[' -z 884711 ']' 00:06:28.719 13:36:55 accel -- common/autotest_common.sh@952 -- # kill -0 884711 00:06:28.719 13:36:55 accel -- common/autotest_common.sh@953 -- # uname 00:06:28.719 13:36:55 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.719 13:36:55 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 884711 00:06:28.980 13:36:55 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:28.980 13:36:55 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.980 13:36:55 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 884711' 00:06:28.980 killing process with pid 884711 00:06:28.980 13:36:55 accel -- common/autotest_common.sh@967 -- # kill 884711 00:06:28.980 13:36:55 accel -- common/autotest_common.sh@972 -- # wait 884711 00:06:28.980 13:36:55 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:28.980 13:36:55 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:28.980 13:36:55 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:28.980 13:36:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.980 13:36:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.241 13:36:55 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:29.241 13:36:55 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:29.241 13:36:55 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:29.241 13:36:55 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.241 13:36:55 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.241 13:36:55 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.241 13:36:55 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.241 13:36:55 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.241 13:36:55 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 
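
The IFS== / read loop above is accel.sh decoding the accel_get_opc_assignments RPC: the jq filter flattens the returned object into "opcode=module" lines, each line is split on '=' and recorded in expected_opcs, and every opcode is assigned to the software module in this run (the earlier config checks all came up empty, so no other accel module is loaded). Queried directly, the call would look roughly like the sketch below; the exact opcode names are not visible in the xtrace, so they are left generic.

  # the same call the test issues through $rpc_py (default socket /var/tmp/spdk.sock)
  ./scripts/rpc.py accel_get_opc_assignments
  # expected shape of the reply: a JSON object of { "<opcode>": "software", ... },
  # which accel.sh turns into "opcode=software" lines before the read loop above
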
00:06:29.241 13:36:55 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:06:29.241 13:36:55 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.241 13:36:55 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:29.241 13:36:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.241 13:36:55 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:29.241 13:36:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:29.241 13:36:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.241 13:36:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.241 ************************************ 00:06:29.241 START TEST accel_missing_filename 00:06:29.241 ************************************ 00:06:29.241 13:36:55 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:29.241 13:36:55 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:29.241 13:36:55 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:29.241 13:36:55 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:29.241 13:36:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.241 13:36:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:29.241 13:36:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.241 13:36:55 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:29.241 13:36:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:29.241 13:36:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:29.241 13:36:55 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.241 13:36:55 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.241 13:36:55 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.241 13:36:55 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.241 13:36:55 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.241 13:36:55 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:29.241 13:36:55 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:29.241 [2024-07-15 13:36:55.651001] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:29.241 [2024-07-15 13:36:55.651103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885065 ] 00:06:29.241 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.241 [2024-07-15 13:36:55.714773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.502 [2024-07-15 13:36:55.778670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.502 [2024-07-15 13:36:55.810481] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.502 [2024-07-15 13:36:55.847339] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:29.502 A filename is required. 00:06:29.502 13:36:55 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:29.502 13:36:55 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.502 13:36:55 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:29.502 13:36:55 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:29.502 13:36:55 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:29.502 13:36:55 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.502 00:06:29.502 real 0m0.282s 00:06:29.502 user 0m0.211s 00:06:29.502 sys 0m0.111s 00:06:29.502 13:36:55 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.502 13:36:55 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:29.503 ************************************ 00:06:29.503 END TEST accel_missing_filename 00:06:29.503 ************************************ 00:06:29.503 13:36:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.503 13:36:55 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.503 13:36:55 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:29.503 13:36:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.503 13:36:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.503 ************************************ 00:06:29.503 START TEST accel_compress_verify 00:06:29.503 ************************************ 00:06:29.503 13:36:55 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.503 13:36:55 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:29.503 13:36:55 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.503 13:36:55 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:29.503 13:36:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.503 13:36:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:29.503 13:36:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.503 13:36:55 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w 
compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.503 13:36:55 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:29.503 13:36:55 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:29.503 13:36:55 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.503 13:36:55 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.503 13:36:55 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.503 13:36:55 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.503 13:36:55 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.503 13:36:55 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:29.503 13:36:55 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:29.503 [2024-07-15 13:36:56.005421] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:29.503 [2024-07-15 13:36:56.005484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885104 ] 00:06:29.787 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.787 [2024-07-15 13:36:56.066775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.787 [2024-07-15 13:36:56.129998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.787 [2024-07-15 13:36:56.161884] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.787 [2024-07-15 13:36:56.198933] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:29.787 00:06:29.787 Compression does not support the verify option, aborting. 
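
Both compress failures above are the intended outcomes of these negative tests: accel_perf refuses to start a compress workload without an input file ("A filename is required.") and rejects the -y verify switch for compress ("Compression does not support the verify option, aborting."). Stripped of the wrappers the suite adds (notably the -c /dev/fd/62 accel config), the two failing invocations reduce to roughly the following, with paths relative to the SPDK tree:

  # accel_missing_filename: compress with no -l input file
  ./build/examples/accel_perf -t 1 -w compress

  # accel_compress_verify: compress with -y result verification
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib -y
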
00:06:29.787 13:36:56 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:29.787 13:36:56 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.787 13:36:56 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:29.787 13:36:56 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:29.787 13:36:56 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:29.787 13:36:56 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.787 00:06:29.787 real 0m0.277s 00:06:29.787 user 0m0.213s 00:06:29.787 sys 0m0.104s 00:06:29.787 13:36:56 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.787 13:36:56 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:29.787 ************************************ 00:06:29.787 END TEST accel_compress_verify 00:06:29.787 ************************************ 00:06:29.787 13:36:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.787 13:36:56 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:29.787 13:36:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:29.787 13:36:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.787 13:36:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.048 ************************************ 00:06:30.048 START TEST accel_wrong_workload 00:06:30.048 ************************************ 00:06:30.048 13:36:56 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:30.048 13:36:56 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:30.048 13:36:56 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:30.048 13:36:56 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.048 13:36:56 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.048 13:36:56 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.048 13:36:56 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.048 13:36:56 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:30.048 13:36:56 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:30.048 13:36:56 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:30.048 13:36:56 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.048 13:36:56 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.048 13:36:56 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.048 13:36:56 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.048 13:36:56 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.048 13:36:56 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:30.048 13:36:56 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:06:30.048 Unsupported workload type: foobar 00:06:30.048 [2024-07-15 13:36:56.357630] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:30.048 accel_perf options: 00:06:30.048 [-h help message] 00:06:30.048 [-q queue depth per core] 00:06:30.048 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:30.048 [-T number of threads per core 00:06:30.048 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:30.048 [-t time in seconds] 00:06:30.048 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:30.048 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:30.048 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:30.048 [-l for compress/decompress workloads, name of uncompressed input file 00:06:30.048 [-S for crc32c workload, use this seed value (default 0) 00:06:30.048 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:30.048 [-f for fill workload, use this BYTE value (default 255) 00:06:30.048 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:30.048 [-y verify result if this switch is on] 00:06:30.048 [-a tasks to allocate per core (default: same value as -q)] 00:06:30.048 Can be used to spread operations across a wider range of memory. 00:06:30.048 13:36:56 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:30.048 13:36:56 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.048 13:36:56 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.048 13:36:56 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.048 00:06:30.048 real 0m0.036s 00:06:30.048 user 0m0.021s 00:06:30.048 sys 0m0.015s 00:06:30.048 13:36:56 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.048 13:36:56 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:30.048 ************************************ 00:06:30.048 END TEST accel_wrong_workload 00:06:30.048 ************************************ 00:06:30.048 Error: writing output failed: Broken pipe 00:06:30.048 13:36:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.048 13:36:56 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:30.048 13:36:56 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:30.048 13:36:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.048 13:36:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.048 ************************************ 00:06:30.048 START TEST accel_negative_buffers 00:06:30.048 ************************************ 00:06:30.048 13:36:56 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:30.048 13:36:56 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:30.048 13:36:56 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:30.048 13:36:56 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.048 13:36:56 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:06:30.048 13:36:56 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.048 13:36:56 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.048 13:36:56 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:30.048 13:36:56 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:30.048 13:36:56 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:30.048 13:36:56 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.048 13:36:56 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.048 13:36:56 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.048 13:36:56 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.048 13:36:56 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.048 13:36:56 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:30.048 13:36:56 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:30.048 -x option must be non-negative. 00:06:30.048 [2024-07-15 13:36:56.466304] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:30.048 accel_perf options: 00:06:30.048 [-h help message] 00:06:30.048 [-q queue depth per core] 00:06:30.048 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:30.048 [-T number of threads per core 00:06:30.048 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:30.048 [-t time in seconds] 00:06:30.048 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:30.048 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:30.048 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:30.048 [-l for compress/decompress workloads, name of uncompressed input file 00:06:30.048 [-S for crc32c workload, use this seed value (default 0) 00:06:30.048 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:30.049 [-f for fill workload, use this BYTE value (default 255) 00:06:30.049 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:30.049 [-y verify result if this switch is on] 00:06:30.049 [-a tasks to allocate per core (default: same value as -q)] 00:06:30.049 Can be used to spread operations across a wider range of memory. 
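
The usage text above (printed once for the rejected foobar workload and again for -x -1) doubles as the reference for the positive accel tests that follow. A well-formed invocation matching the two crc32c cases the suite runs next would be roughly the following; the suite also injects its accel config via -c /dev/fd/62, omitted here, and paths assume the SPDK tree:

  # 1-second crc32c run with seed value 32, verifying each result (-y)
  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y

  # same workload with an io vector size of 2 (the crc32c_C2 case)
  ./build/examples/accel_perf -t 1 -w crc32c -y -C 2
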
00:06:30.049 13:36:56 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:30.049 13:36:56 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.049 13:36:56 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.049 13:36:56 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.049 00:06:30.049 real 0m0.034s 00:06:30.049 user 0m0.020s 00:06:30.049 sys 0m0.014s 00:06:30.049 13:36:56 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.049 13:36:56 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:30.049 ************************************ 00:06:30.049 END TEST accel_negative_buffers 00:06:30.049 ************************************ 00:06:30.049 Error: writing output failed: Broken pipe 00:06:30.049 13:36:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.049 13:36:56 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:30.049 13:36:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:30.049 13:36:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.049 13:36:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.049 ************************************ 00:06:30.049 START TEST accel_crc32c 00:06:30.049 ************************************ 00:06:30.049 13:36:56 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:30.049 13:36:56 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:30.049 13:36:56 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:30.049 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.049 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.049 13:36:56 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:30.049 13:36:56 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:30.049 13:36:56 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:30.049 13:36:56 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.049 13:36:56 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.049 13:36:56 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.049 13:36:56 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.049 13:36:56 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.049 13:36:56 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:30.049 13:36:56 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:30.310 [2024-07-15 13:36:56.574682] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:30.310 [2024-07-15 13:36:56.574774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885166 ] 00:06:30.310 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.310 [2024-07-15 13:36:56.636477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.310 [2024-07-15 13:36:56.699950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 
00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.310 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.311 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.311 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.311 13:36:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.311 13:36:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.311 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.311 13:36:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.697 13:36:57 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:31.697 13:36:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.697 00:06:31.697 real 0m1.284s 00:06:31.697 user 0m1.196s 00:06:31.697 sys 0m0.100s 00:06:31.697 13:36:57 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.697 13:36:57 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:31.697 ************************************ 00:06:31.697 END TEST accel_crc32c 00:06:31.697 ************************************ 00:06:31.697 13:36:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.697 13:36:57 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:31.697 13:36:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:31.697 13:36:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.697 13:36:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.697 ************************************ 00:06:31.697 START TEST accel_crc32c_C2 00:06:31.697 ************************************ 00:06:31.697 13:36:57 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:31.697 13:36:57 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.697 13:36:57 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:31.697 13:36:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.697 13:36:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.697 13:36:57 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:31.697 13:36:57 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:31.697 13:36:57 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.697 13:36:57 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.697 13:36:57 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.697 13:36:57 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.697 13:36:57 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.697 13:36:57 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.697 13:36:57 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:31.697 13:36:57 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
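The trace above drives the crc32c workload through build/examples/accel_perf, with the JSON accel config piped in on /dev/fd/62 by the test harness. Below is a minimal sketch of the same invocation outside the harness; the flag meanings in the comments are assumptions inferred from the values echoed in the trace (for example '-t 1' alongside the '1 seconds' value), not taken from accel_perf documentation.

    # hypothetical standalone run; SPDK path copied from the trace, flag semantics assumed
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK"/build/examples/accel_perf -t 1 -w crc32c -y -C 2
    #   -t 1   run for 1 second (matches the '1 seconds' value in the trace; assumed)
    #   -w     workload type, crc32c here
    #   -y     verify results (assumed)
    #   -C 2   argument passed by run_test accel_crc32c_C2 (meaning assumed)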
00:06:31.697 [2024-07-15 13:36:57.930411] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:31.697 [2024-07-15 13:36:57.930477] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885521 ] 00:06:31.697 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.697 [2024-07-15 13:36:57.991258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.697 [2024-07-15 13:36:58.057419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.697 
13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.697 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.698 13:36:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 
-- # val= 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.084 00:06:33.084 real 0m1.283s 00:06:33.084 user 0m1.195s 00:06:33.084 sys 0m0.100s 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.084 13:36:59 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:33.084 ************************************ 00:06:33.084 END TEST accel_crc32c_C2 00:06:33.084 ************************************ 00:06:33.084 13:36:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.084 13:36:59 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:33.084 13:36:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:33.084 13:36:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.084 13:36:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.084 ************************************ 00:06:33.084 START TEST accel_copy 00:06:33.084 ************************************ 00:06:33.084 13:36:59 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 
]] 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:33.084 [2024-07-15 13:36:59.289046] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:33.084 [2024-07-15 13:36:59.289109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885868 ] 00:06:33.084 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.084 [2024-07-15 13:36:59.349365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.084 [2024-07-15 13:36:59.415508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:33.084 13:36:59 accel.accel_copy 
-- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.084 13:36:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.026 
13:37:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:34.026 13:37:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.026 00:06:34.026 real 0m1.284s 00:06:34.026 user 0m1.197s 00:06:34.026 sys 0m0.098s 00:06:34.026 13:37:00 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.026 13:37:00 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:34.026 ************************************ 00:06:34.026 END TEST accel_copy 00:06:34.026 ************************************ 00:06:34.287 13:37:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.287 13:37:00 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.287 13:37:00 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:34.287 13:37:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.287 13:37:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.287 ************************************ 00:06:34.287 START TEST accel_fill 00:06:34.287 ************************************ 00:06:34.287 13:37:00 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.287 13:37:00 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:34.287 13:37:00 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:34.287 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.287 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.287 13:37:00 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.287 13:37:00 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.287 13:37:00 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:34.287 13:37:00 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.287 13:37:00 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.287 13:37:00 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.287 13:37:00 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.287 13:37:00 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.287 13:37:00 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:34.287 13:37:00 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:34.287 [2024-07-15 13:37:00.648292] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
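The accel_fill test starting above passes extra sizing arguments (-f 128 -q 64 -a 64) that the simpler copy and crc32c runs do not use. A sketch of that invocation follows, again treating flag meanings as assumptions rather than documented behaviour.

    # hypothetical direct fill run mirroring "accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y"
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK"/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
    #   -f 128  fill byte/pattern (assumed; the trace echoes val=0x80, i.e. 128)
    #   -q 64   queue depth (assumed; the trace echoes val=64)
    #   -a 64   alignment or allocation size (assumed; the trace echoes a second val=64)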
00:06:34.287 [2024-07-15 13:37:00.648383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886207 ] 00:06:34.287 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.287 [2024-07-15 13:37:00.711457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.287 [2024-07-15 13:37:00.782950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
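Each app start in this section logs "EAL: No free 2048 kB hugepages reported on node 1", yet the reactor still starts on core 0, so the notice is informational for these runs. It is the kind of message worth checking when a DPDK-based app fails to initialize; a small check that could be run on the node (standard Linux paths, not part of this test script):

    # per-node 2 MB hugepage availability (node number used here only as an example)
    cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages
    # overall hugepage view
    grep -i huge /proc/meminfo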
00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.548 13:37:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.562 13:37:01 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:35.562 13:37:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.562 00:06:35.562 real 0m1.293s 00:06:35.562 user 0m1.200s 00:06:35.562 sys 0m0.105s 00:06:35.562 13:37:01 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.562 13:37:01 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:35.562 ************************************ 00:06:35.562 END TEST accel_fill 00:06:35.562 ************************************ 00:06:35.562 13:37:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.562 13:37:01 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:35.562 13:37:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:35.562 13:37:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.562 13:37:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.562 ************************************ 00:06:35.562 START TEST accel_copy_crc32c 00:06:35.563 ************************************ 00:06:35.563 13:37:01 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:35.563 13:37:01 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:35.563 13:37:01 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:35.563 13:37:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.563 13:37:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.563 13:37:01 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:35.563 13:37:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:35.563 13:37:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:35.563 13:37:01 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.563 13:37:01 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.563 13:37:01 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.563 13:37:01 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.563 13:37:01 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.563 13:37:01 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:35.563 13:37:01 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:35.563 [2024-07-15 13:37:02.015700] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
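The accel.sh@27 lines above ("[[ -n software ]]", "[[ -n fill ]]", "[[ software == \s\o\f\t\w\a\r\e ]]") are the post-run assertions with their variables already expanded by xtrace. A sketch of the unexpanded pattern; the variable names follow the accel_module/accel_opc assignments visible earlier in the trace, and expected_module is a placeholder:

    # assert the run reported a module and an opcode, and that the module matches
    [[ -n $accel_module ]]
    [[ -n $accel_opc ]]
    [[ $accel_module == "$expected_module" ]]   # "software" in these runs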
00:06:35.563 [2024-07-15 13:37:02.015766] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886384 ] 00:06:35.563 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.563 [2024-07-15 13:37:02.078336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.824 [2024-07-15 13:37:02.149016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.824 13:37:02 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.824 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.825 13:37:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read 
-r var val 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.775 00:06:36.775 real 0m1.290s 00:06:36.775 user 0m1.197s 00:06:36.775 sys 0m0.105s 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.775 13:37:03 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:36.775 ************************************ 00:06:36.775 END TEST accel_copy_crc32c 00:06:36.775 ************************************ 00:06:37.038 13:37:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.038 13:37:03 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:37.038 13:37:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:37.038 13:37:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.038 13:37:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.038 ************************************ 00:06:37.038 START TEST accel_copy_crc32c_C2 00:06:37.038 ************************************ 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 
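Most of the trace volume in this section is accel.sh stepping through "IFS=:", "read -r var val" and "case \"$var\" in" for every line of accel_perf output. A sketch of that parsing pattern is below; the loop structure and key names are assumptions, only the IFS/read/case idiom is taken from the trace.

    # parse colon-separated "key: value" lines from a perf run (key names hypothetical)
    while IFS=: read -r var val; do
        case "$var" in
            *module*) accel_module=${val//[[:space:]]/} ;;
            *opcode*) accel_opc=${val//[[:space:]]/} ;;
        esac
    done < perf_output.txt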
00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:37.038 [2024-07-15 13:37:03.380657] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:37.038 [2024-07-15 13:37:03.380752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886609 ] 00:06:37.038 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.038 [2024-07-15 13:37:03.442816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.038 [2024-07-15 13:37:03.511616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.038 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:37.039 13:37:03 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.039 13:37:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.424 00:06:38.424 real 0m1.290s 00:06:38.424 user 0m1.197s 00:06:38.424 sys 0m0.106s 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.424 13:37:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:06:38.424 ************************************ 00:06:38.424 END TEST accel_copy_crc32c_C2 00:06:38.424 ************************************ 00:06:38.424 13:37:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.424 13:37:04 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:38.424 13:37:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:38.424 13:37:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.424 13:37:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.424 ************************************ 00:06:38.424 START TEST accel_dualcast 00:06:38.424 ************************************ 00:06:38.424 13:37:04 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:38.424 [2024-07-15 13:37:04.743251] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
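The dualcast test starting above follows the same shape as the earlier workloads, and each finished test prints real/user/sys timings of roughly 1.28 to 1.29 seconds. A sketch combining the two, assuming the per-test timing comes from bash's time builtin (an assumption; only the timing format is visible in this log):

    # hypothetical timed dualcast run
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    time "$SPDK"/build/examples/accel_perf -t 1 -w dualcast -y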
00:06:38.424 [2024-07-15 13:37:04.743346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886958 ] 00:06:38.424 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.424 [2024-07-15 13:37:04.805331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.424 [2024-07-15 13:37:04.872789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:38.424 13:37:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:38.425 13:37:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.810 13:37:05 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.810 13:37:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.810 13:37:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.810 13:37:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.810 13:37:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.810 13:37:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.810 13:37:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.810 13:37:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.810 13:37:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:39.810 13:37:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.810 00:06:39.810 real 0m1.288s 00:06:39.810 user 0m1.195s 00:06:39.810 sys 0m0.103s 00:06:39.810 13:37:06 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.810 13:37:06 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:39.810 ************************************ 00:06:39.810 END TEST accel_dualcast 00:06:39.810 ************************************ 00:06:39.810 13:37:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.810 13:37:06 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:39.810 13:37:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:39.810 13:37:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.810 13:37:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.810 ************************************ 00:06:39.810 START TEST accel_compare 00:06:39.810 ************************************ 00:06:39.810 13:37:06 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:39.810 13:37:06 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:39.810 13:37:06 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:39.810 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.810 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.810 13:37:06 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:39.811 [2024-07-15 13:37:06.107659] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:39.811 [2024-07-15 13:37:06.107757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid887314 ] 00:06:39.811 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.811 [2024-07-15 13:37:06.168001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.811 [2024-07-15 13:37:06.231605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.811 13:37:06 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.811 13:37:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 
13:37:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:41.195 13:37:07 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.195 00:06:41.195 real 0m1.283s 00:06:41.195 user 0m1.204s 00:06:41.195 sys 0m0.090s 00:06:41.195 13:37:07 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.195 13:37:07 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:41.195 ************************************ 00:06:41.195 END TEST accel_compare 00:06:41.195 ************************************ 00:06:41.195 13:37:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.195 13:37:07 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:41.195 13:37:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:41.195 13:37:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.195 13:37:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.195 ************************************ 00:06:41.195 START TEST accel_xor 00:06:41.195 ************************************ 00:06:41.195 13:37:07 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:41.195 [2024-07-15 13:37:07.465214] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:41.195 [2024-07-15 13:37:07.465337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid887663 ] 00:06:41.195 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.195 [2024-07-15 13:37:07.543476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.195 [2024-07-15 13:37:07.611907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:41.195 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.196 13:37:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.598 00:06:42.598 real 0m1.305s 00:06:42.598 user 0m1.201s 00:06:42.598 sys 0m0.117s 00:06:42.598 13:37:08 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.598 13:37:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:42.598 ************************************ 00:06:42.598 END TEST accel_xor 00:06:42.598 ************************************ 00:06:42.598 13:37:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.598 13:37:08 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:42.598 13:37:08 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:42.598 13:37:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.598 13:37:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.598 ************************************ 00:06:42.598 START TEST accel_xor 00:06:42.598 ************************************ 00:06:42.598 13:37:08 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:42.598 13:37:08 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:42.598 [2024-07-15 13:37:08.844841] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:42.598 [2024-07-15 13:37:08.844909] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid887882 ] 00:06:42.598 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.598 [2024-07-15 13:37:08.908241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.598 [2024-07-15 13:37:08.979216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.598 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.599 13:37:09 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.599 13:37:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:43.985 13:37:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.985 00:06:43.985 real 0m1.298s 00:06:43.985 user 0m1.205s 00:06:43.985 sys 0m0.104s 00:06:43.985 13:37:10 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.985 13:37:10 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:43.985 ************************************ 00:06:43.985 END TEST accel_xor 00:06:43.985 ************************************ 00:06:43.985 13:37:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.985 13:37:10 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:43.985 13:37:10 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:43.985 13:37:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.985 13:37:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.985 ************************************ 00:06:43.985 START TEST accel_dif_verify 00:06:43.985 ************************************ 00:06:43.985 13:37:10 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:43.985 [2024-07-15 13:37:10.217954] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:43.985 [2024-07-15 13:37:10.218031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid888083 ] 00:06:43.985 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.985 [2024-07-15 13:37:10.280398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.985 [2024-07-15 13:37:10.348851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.985 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.986 13:37:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:45.370 13:37:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.370 00:06:45.370 real 0m1.290s 00:06:45.370 user 0m1.197s 00:06:45.370 sys 0m0.105s 00:06:45.370 13:37:11 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.370 13:37:11 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:45.370 ************************************ 00:06:45.370 END TEST accel_dif_verify 00:06:45.370 ************************************ 00:06:45.370 13:37:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.370 13:37:11 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:45.370 13:37:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:45.370 13:37:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.370 13:37:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.370 ************************************ 00:06:45.370 START TEST accel_dif_generate 00:06:45.370 ************************************ 00:06:45.370 13:37:11 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 
13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:45.370 [2024-07-15 13:37:11.584427] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:45.370 [2024-07-15 13:37:11.584509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid888402 ] 00:06:45.370 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.370 [2024-07-15 13:37:11.645336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.370 [2024-07-15 13:37:11.708027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:45.370 13:37:11 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.370 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.371 13:37:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.371 13:37:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.371 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.371 13:37:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.312 13:37:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.313 13:37:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.573 13:37:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.573 13:37:12 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.573 13:37:12 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:46.573 13:37:12 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.573 00:06:46.573 real 0m1.281s 00:06:46.573 user 0m1.198s 00:06:46.573 sys 0m0.095s 00:06:46.573 13:37:12 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.573 13:37:12 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:46.573 ************************************ 00:06:46.573 END TEST accel_dif_generate 00:06:46.573 ************************************ 00:06:46.573 13:37:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.573 13:37:12 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:46.573 13:37:12 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:46.573 13:37:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.573 13:37:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.573 ************************************ 00:06:46.573 START TEST accel_dif_generate_copy 00:06:46.573 ************************************ 00:06:46.573 13:37:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:46.573 13:37:12 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:46.573 13:37:12 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:46.574 13:37:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.574 13:37:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.574 13:37:12 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:46.574 13:37:12 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:46.574 13:37:12 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:46.574 13:37:12 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.574 13:37:12 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.574 13:37:12 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.574 13:37:12 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.574 13:37:12 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.574 13:37:12 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:46.574 13:37:12 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:46.574 [2024-07-15 13:37:12.942594] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
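For reference, the accel_perf invocation that produced the dif_generate trace above can be re-run by hand when debugging a failure. This is a minimal sketch using only flags that appear verbatim in the trace; ACCEL_CFG is a hypothetical stand-in for the JSON accel config that accel.sh builds in memory and feeds to the app over /dev/fd/62 (the trace shows accel_json_cfg=() staying empty, so no accel modules were overridden in this run).

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# ACCEL_CFG: stand-in path to an accel JSON config equivalent to what accel.sh pipes over fd 62
# 1-second software dif_generate run, matching '-t 1 -w dif_generate' in the trace above
"$SPDK_DIR/build/examples/accel_perf" -c "$ACCEL_CFG" -t 1 -w dif_generate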
00:06:46.574 [2024-07-15 13:37:12.942712] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid888751 ] 00:06:46.574 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.574 [2024-07-15 13:37:13.009749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.574 [2024-07-15 13:37:13.074988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
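Each accel test case launches a fresh accel_perf process, which is why the "Starting SPDK v24.09-pre" banner, a DPDK EAL parameter line with a unique --file-prefix=spdk_pid<pid>, the "EAL: No free 2048 kB hugepages reported on node 1" notice, "Total cores available: 1", and a single "Reactor started on core 0" (the functional runs pin to -c 0x1) repeat for every workload. The hugepage notice is per-NUMA-node and is usually benign when the 2 MiB pages were reserved on another node; a quick, hedged way to confirm where they live on the test host (not part of the harness):

# show the 2048 kB hugepage reservation for each NUMA node
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages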
00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.836 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.837 13:37:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.778 00:06:47.778 real 0m1.294s 00:06:47.778 user 0m1.204s 00:06:47.778 sys 0m0.101s 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.778 13:37:14 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.778 ************************************ 00:06:47.778 END TEST accel_dif_generate_copy 00:06:47.778 ************************************ 00:06:47.778 13:37:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.778 13:37:14 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:47.778 13:37:14 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.778 13:37:14 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:47.778 13:37:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.778 13:37:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.778 ************************************ 00:06:47.778 START TEST accel_comp 00:06:47.778 ************************************ 00:06:47.778 13:37:14 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.778 13:37:14 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:47.778 13:37:14 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:47.778 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.778 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.778 13:37:14 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.778 13:37:14 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.778 13:37:14 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:47.778 13:37:14 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.778 13:37:14 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.778 13:37:14 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.778 13:37:14 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.778 13:37:14 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.778 13:37:14 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:47.778 13:37:14 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:48.038 [2024-07-15 13:37:14.306265] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:48.038 [2024-07-15 13:37:14.306336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid889104 ] 00:06:48.038 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.038 [2024-07-15 13:37:14.366788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.038 [2024-07-15 13:37:14.431083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.038 13:37:14 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:48.038 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.039 13:37:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:49.421 13:37:15 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.421 00:06:49.421 real 0m1.284s 00:06:49.421 user 0m1.191s 00:06:49.421 sys 0m0.105s 00:06:49.421 13:37:15 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.421 13:37:15 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:49.421 ************************************ 00:06:49.421 END TEST accel_comp 00:06:49.421 ************************************ 00:06:49.421 13:37:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.421 13:37:15 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:49.421 13:37:15 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:49.421 13:37:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.421 13:37:15 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:49.421 ************************************ 00:06:49.421 START TEST accel_decomp 00:06:49.421 ************************************ 00:06:49.421 13:37:15 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:49.421 [2024-07-15 13:37:15.666296] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
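The compress/decompress cases reuse the same binary; the only differences visible in the run_test lines are the input and mode flags (-l .../spdk/test/accel/bib, -y, and -o 0 for the *_full variants further down). A hedged reproduction of the plain decompress run that has just started above, with ACCEL_CFG again standing in for the config piped over /dev/fd/62:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# ACCEL_CFG: stand-in for the fd-62 accel JSON config, as in the earlier sketch
# software decompress of the bundled bib test file, matching '-t 1 -w decompress -l ... -y'
"$SPDK_DIR/build/examples/accel_perf" -c "$ACCEL_CFG" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y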
00:06:49.421 [2024-07-15 13:37:15.666368] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid889379 ] 00:06:49.421 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.421 [2024-07-15 13:37:15.737094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.421 [2024-07-15 13:37:15.805709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.421 13:37:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.802 13:37:16 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.802 13:37:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.803 13:37:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:50.803 13:37:16 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.803 00:06:50.803 real 0m1.300s 00:06:50.803 user 0m1.212s 00:06:50.803 sys 0m0.101s 00:06:50.803 13:37:16 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.803 13:37:16 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:50.803 ************************************ 00:06:50.803 END TEST accel_decomp 00:06:50.803 ************************************ 00:06:50.803 13:37:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.803 13:37:16 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.803 13:37:16 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:50.803 13:37:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.803 13:37:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.803 ************************************ 00:06:50.803 START TEST accel_decomp_full 00:06:50.803 ************************************ 00:06:50.803 13:37:17 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.803 13:37:17 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:50.803 [2024-07-15 13:37:17.039450] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:50.803 [2024-07-15 13:37:17.039519] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid889572 ] 00:06:50.803 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.803 [2024-07-15 13:37:17.102236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.803 [2024-07-15 13:37:17.171740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.803 13:37:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:52.189 13:37:18 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.189 00:06:52.189 real 0m1.301s 00:06:52.189 user 0m1.204s 00:06:52.189 sys 0m0.110s 00:06:52.190 13:37:18 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.190 13:37:18 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:52.190 ************************************ 00:06:52.190 END TEST accel_decomp_full 00:06:52.190 ************************************ 00:06:52.190 13:37:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.190 13:37:18 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.190 13:37:18 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:52.190 13:37:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.190 13:37:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.190 ************************************ 00:06:52.190 START TEST accel_decomp_mcore 00:06:52.190 ************************************ 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:52.190 [2024-07-15 13:37:18.415328] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
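The _mcore variant adds -m 0xf to the same decompress command, and the trace that follows confirms the wider core mask: the EAL parameter line carries -c 0xf, spdk_app_start reports "Total cores available: 4", and four separate "Reactor started on core N" notices appear for cores 0-3 (the timing further down shows user 0m4.440s against real 0m1.300s, consistent with four busy cores). Hedged reproduction, with the same SPDK_DIR/ACCEL_CFG stand-ins as in the sketches above:

# four-core run, matching '-t 1 -w decompress -l ... -y -m 0xf' in the trace
"$SPDK_DIR/build/examples/accel_perf" -c "$ACCEL_CFG" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -m 0xf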
00:06:52.190 [2024-07-15 13:37:18.415404] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid889845 ] 00:06:52.190 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.190 [2024-07-15 13:37:18.477191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.190 [2024-07-15 13:37:18.548601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.190 [2024-07-15 13:37:18.548716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.190 [2024-07-15 13:37:18.548873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.190 [2024-07-15 13:37:18.548873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:52.190 13:37:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.576 00:06:53.576 real 0m1.300s 00:06:53.576 user 0m4.440s 00:06:53.576 sys 0m0.106s 00:06:53.576 13:37:19 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.576 13:37:19 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:53.576 ************************************ 00:06:53.576 END TEST accel_decomp_mcore 00:06:53.576 ************************************ 00:06:53.576 13:37:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.576 13:37:19 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:53.576 13:37:19 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:53.576 13:37:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.576 13:37:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.576 ************************************ 00:06:53.576 START TEST accel_decomp_full_mcore 00:06:53.576 ************************************ 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:53.576 [2024-07-15 13:37:19.791964] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
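For reference, the accel_decomp_full_mcore case above reduces to the accel_perf invocation recorded in the trace (the accel/accel.sh@12 line). A minimal standalone sketch of that command, reusing the workspace paths from this run; the JSON accel config is supplied on /dev/fd/62 by the test harness (build_accel_config), which this sketch assumes is already wired up:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -t 1: run the workload for 1 second; -w decompress: decompress workload
# -l .../test/accel/bib: compressed input file; -y: verify the decompressed data
# -o 0: in this harness corresponds to the full-file transfer size (the '111250 bytes' value in the trace)
# -m 0xf: core mask 0xf, i.e. four reactors (cf. "Total cores available: 4" in the EAL output)
"$SPDK/build/examples/accel_perf" -c /dev/fd/62 \
  -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf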
00:06:53.576 [2024-07-15 13:37:19.792022] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890195 ] 00:06:53.576 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.576 [2024-07-15 13:37:19.852923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.576 [2024-07-15 13:37:19.919052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.576 [2024-07-15 13:37:19.919188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.576 [2024-07-15 13:37:19.919249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.576 [2024-07-15 13:37:19.919249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.576 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.577 13:37:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.967 00:06:54.967 real 0m1.310s 00:06:54.967 user 0m4.487s 00:06:54.967 sys 0m0.109s 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.967 13:37:21 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:54.967 ************************************ 00:06:54.967 END TEST accel_decomp_full_mcore 00:06:54.967 ************************************ 00:06:54.967 13:37:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.967 13:37:21 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:54.967 13:37:21 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:54.968 13:37:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.968 13:37:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.968 ************************************ 00:06:54.968 START TEST accel_decomp_mthread 00:06:54.968 ************************************ 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:54.968 [2024-07-15 13:37:21.174972] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
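The accel_decomp_mthread case swaps the multi-core mask for a thread count: the same decompress workload on a single core (EAL mask 0x1 in the parameters below) with accel_perf's -T 2 option (the 'val=2' entry in the trace) and the 4096-byte transfer size shown above. A sketch of that invocation, under the same config-on-fd-62 assumption as the previous sketch:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -T 2: two worker threads on the single core; no -o, so the 4096-byte size applies
"$SPDK/build/examples/accel_perf" -c /dev/fd/62 \
  -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2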
00:06:54.968 [2024-07-15 13:37:21.175066] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890548 ] 00:06:54.968 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.968 [2024-07-15 13:37:21.235751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.968 [2024-07-15 13:37:21.302060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.968 13:37:21 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.968 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.969 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.969 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.969 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.969 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.969 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.969 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.969 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.969 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.969 13:37:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.950 13:37:22 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.950 00:06:55.950 real 0m1.292s 00:06:55.950 user 0m1.197s 00:06:55.950 sys 0m0.109s 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.950 13:37:22 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:55.951 ************************************ 00:06:55.951 END TEST accel_decomp_mthread 00:06:55.951 ************************************ 00:06:56.212 13:37:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.212 13:37:22 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.212 13:37:22 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:56.212 13:37:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.212 13:37:22 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:56.212 ************************************ 00:06:56.212 START TEST accel_decomp_full_mthread 00:06:56.212 ************************************ 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:56.212 [2024-07-15 13:37:22.540600] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
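accel_decomp_full_mthread combines the two previous variations: full-file transfers (-o 0) and two threads (-T 2) on one core. The four decompress permutations driven in this part of the run are summarised below; flags are taken from the respective run_test lines, and the plain mcore row (whose command precedes this excerpt) is inferred from its name and the 0m4.440s of user CPU time in roughly 1.3 s of wall time:

# variant                       extra accel_perf flags   transfer size    cores
# accel_decomp_mcore            -m 0xf (inferred)        4096 bytes*      4
# accel_decomp_full_mcore       -o 0 -m 0xf              111250 bytes     4
# accel_decomp_mthread          -T 2                     4096 bytes       1
# accel_decomp_full_mthread     -o 0 -T 2                111250 bytes     1
# (* harness default, not printed in the excerpt above)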
00:06:56.212 [2024-07-15 13:37:22.540666] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890884 ] 00:06:56.212 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.212 [2024-07-15 13:37:22.603074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.212 [2024-07-15 13:37:22.672684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.212 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.213 13:37:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.594 00:06:57.594 real 0m1.323s 00:06:57.594 user 0m1.230s 00:06:57.594 sys 0m0.106s 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.594 13:37:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:57.594 ************************************ 00:06:57.594 END TEST accel_decomp_full_mthread 
00:06:57.594 ************************************ 00:06:57.594 13:37:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.594 13:37:23 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:57.595 13:37:23 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:57.595 13:37:23 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:57.595 13:37:23 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:57.595 13:37:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.595 13:37:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.595 13:37:23 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.595 13:37:23 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.595 13:37:23 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.595 13:37:23 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.595 13:37:23 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.595 13:37:23 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:57.595 13:37:23 accel -- accel/accel.sh@41 -- # jq -r . 00:06:57.595 ************************************ 00:06:57.595 START TEST accel_dif_functional_tests 00:06:57.595 ************************************ 00:06:57.595 13:37:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:57.595 [2024-07-15 13:37:23.961065] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:57.595 [2024-07-15 13:37:23.961118] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891095 ] 00:06:57.595 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.595 [2024-07-15 13:37:24.022906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.595 [2024-07-15 13:37:24.096019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.595 [2024-07-15 13:37:24.096152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.595 [2024-07-15 13:37:24.096178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.856 00:06:57.856 00:06:57.856 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.856 http://cunit.sourceforge.net/ 00:06:57.856 00:06:57.856 00:06:57.856 Suite: accel_dif 00:06:57.856 Test: verify: DIF generated, GUARD check ...passed 00:06:57.856 Test: verify: DIF generated, APPTAG check ...passed 00:06:57.856 Test: verify: DIF generated, REFTAG check ...passed 00:06:57.856 Test: verify: DIF not generated, GUARD check ...[2024-07-15 13:37:24.151718] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:57.856 passed 00:06:57.856 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 13:37:24.151763] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:57.856 passed 00:06:57.856 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 13:37:24.151784] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:57.856 passed 00:06:57.856 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:57.856 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 13:37:24.151830] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:57.856 passed 00:06:57.856 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:57.856 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:57.856 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:57.856 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 13:37:24.151945] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:57.856 passed 00:06:57.856 Test: verify copy: DIF generated, GUARD check ...passed 00:06:57.856 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:57.856 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:57.856 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 13:37:24.152064] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:57.856 passed 00:06:57.856 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 13:37:24.152087] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:57.856 passed 00:06:57.856 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 13:37:24.152108] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:57.856 passed 00:06:57.856 Test: generate copy: DIF generated, GUARD check ...passed 00:06:57.856 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:57.856 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:57.856 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:57.856 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:57.856 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:57.856 Test: generate copy: iovecs-len validate ...[2024-07-15 13:37:24.152303] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:57.856 passed 00:06:57.856 Test: generate copy: buffer alignment validate ...passed 00:06:57.856 00:06:57.856 Run Summary: Type Total Ran Passed Failed Inactive 00:06:57.856 suites 1 1 n/a 0 0 00:06:57.856 tests 26 26 26 0 0 00:06:57.856 asserts 115 115 115 0 n/a 00:06:57.856 00:06:57.856 Elapsed time = 0.002 seconds 00:06:57.856 00:06:57.856 real 0m0.358s 00:06:57.856 user 0m0.499s 00:06:57.856 sys 0m0.124s 00:06:57.856 13:37:24 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.856 13:37:24 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:57.856 ************************************ 00:06:57.856 END TEST accel_dif_functional_tests 00:06:57.856 ************************************ 00:06:57.856 13:37:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.856 00:06:57.856 real 0m30.063s 00:06:57.856 user 0m33.667s 00:06:57.856 sys 0m4.127s 00:06:57.856 13:37:24 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.856 13:37:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.856 ************************************ 00:06:57.856 END TEST accel 00:06:57.856 ************************************ 00:06:57.856 13:37:24 -- common/autotest_common.sh@1142 -- # return 0 00:06:57.856 13:37:24 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:57.856 13:37:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.856 13:37:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.856 13:37:24 -- common/autotest_common.sh@10 -- # set +x 00:06:58.118 ************************************ 00:06:58.118 START TEST accel_rpc 00:06:58.118 ************************************ 00:06:58.118 13:37:24 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:58.118 * Looking for test storage... 00:06:58.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:58.118 13:37:24 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:58.118 13:37:24 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=891331 00:06:58.118 13:37:24 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 891331 00:06:58.118 13:37:24 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:58.118 13:37:24 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 891331 ']' 00:06:58.118 13:37:24 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.118 13:37:24 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.118 13:37:24 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.118 13:37:24 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.118 13:37:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.118 [2024-07-15 13:37:24.541792] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
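The accel_dif_functional_tests run that closes out the accel suite above is a CUnit binary rather than an accel_perf wrapper: it pushes buffers with deliberately corrupted Guard, App Tag and Ref Tag fields through the DIF verify and generate-copy paths, so the dif.c *ERROR* lines (Guard Expected=5a5a/Actual=7867, the bounce_iovs size complaint from spdk_dif_generate_copy, and so on) are the intended negative checks behind the 26/26 passed summary. A sketch of running it directly, under the same config-on-fd-62 assumption as the earlier sketches:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Runs the accel_dif CUnit suite; a non-zero exit or a non-empty "Failed" column in the
# run summary would indicate a real failure, not the intentional *ERROR* lines above.
"$SPDK/test/accel/dif/dif" -c /dev/fd/62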
00:06:58.118 [2024-07-15 13:37:24.541863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891331 ] 00:06:58.118 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.118 [2024-07-15 13:37:24.604588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.378 [2024-07-15 13:37:24.678670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.949 13:37:25 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.949 13:37:25 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:58.949 13:37:25 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:58.949 13:37:25 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:58.949 13:37:25 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:58.949 13:37:25 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:58.949 13:37:25 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:58.949 13:37:25 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.949 13:37:25 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.949 13:37:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.949 ************************************ 00:06:58.949 START TEST accel_assign_opcode 00:06:58.949 ************************************ 00:06:58.949 13:37:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:58.949 13:37:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:58.949 13:37:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.949 13:37:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:58.949 [2024-07-15 13:37:25.332606] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:58.949 13:37:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.949 13:37:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:58.949 13:37:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.949 13:37:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:58.949 [2024-07-15 13:37:25.344633] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:58.949 13:37:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.949 13:37:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:58.949 13:37:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.949 13:37:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.210 13:37:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.210 13:37:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:59.210 13:37:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:59.210 13:37:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:06:59.210 13:37:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.210 13:37:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:59.210 13:37:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.210 software 00:06:59.210 00:06:59.210 real 0m0.221s 00:06:59.210 user 0m0.053s 00:06:59.210 sys 0m0.009s 00:06:59.210 13:37:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.210 13:37:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.210 ************************************ 00:06:59.210 END TEST accel_assign_opcode 00:06:59.210 ************************************ 00:06:59.210 13:37:25 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:59.210 13:37:25 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 891331 00:06:59.210 13:37:25 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 891331 ']' 00:06:59.210 13:37:25 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 891331 00:06:59.210 13:37:25 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:59.210 13:37:25 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.210 13:37:25 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 891331 00:06:59.210 13:37:25 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.210 13:37:25 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.210 13:37:25 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 891331' 00:06:59.210 killing process with pid 891331 00:06:59.210 13:37:25 accel_rpc -- common/autotest_common.sh@967 -- # kill 891331 00:06:59.210 13:37:25 accel_rpc -- common/autotest_common.sh@972 -- # wait 891331 00:06:59.471 00:06:59.471 real 0m1.467s 00:06:59.471 user 0m1.541s 00:06:59.471 sys 0m0.403s 00:06:59.471 13:37:25 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.471 13:37:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.471 ************************************ 00:06:59.471 END TEST accel_rpc 00:06:59.471 ************************************ 00:06:59.471 13:37:25 -- common/autotest_common.sh@1142 -- # return 0 00:06:59.471 13:37:25 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:59.471 13:37:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.471 13:37:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.471 13:37:25 -- common/autotest_common.sh@10 -- # set +x 00:06:59.471 ************************************ 00:06:59.471 START TEST app_cmdline 00:06:59.471 ************************************ 00:06:59.471 13:37:25 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:59.732 * Looking for test storage... 
00:06:59.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:59.732 13:37:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:59.732 13:37:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=891733 00:06:59.732 13:37:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 891733 00:06:59.732 13:37:26 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:59.732 13:37:26 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 891733 ']' 00:06:59.732 13:37:26 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.732 13:37:26 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.732 13:37:26 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.732 13:37:26 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.732 13:37:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.732 [2024-07-15 13:37:26.095792] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:59.732 [2024-07-15 13:37:26.095868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891733 ] 00:06:59.732 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.732 [2024-07-15 13:37:26.158847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.732 [2024-07-15 13:37:26.233370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.674 13:37:26 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.674 13:37:26 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:00.674 13:37:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:00.674 { 00:07:00.674 "version": "SPDK v24.09-pre git sha1 2728651ee", 00:07:00.674 "fields": { 00:07:00.674 "major": 24, 00:07:00.674 "minor": 9, 00:07:00.674 "patch": 0, 00:07:00.674 "suffix": "-pre", 00:07:00.674 "commit": "2728651ee" 00:07:00.674 } 00:07:00.674 } 00:07:00.674 13:37:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:00.674 13:37:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:00.674 13:37:27 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:00.674 13:37:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:00.674 13:37:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:00.674 13:37:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:00.674 13:37:27 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.674 13:37:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:00.674 13:37:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.674 13:37:27 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.674 13:37:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:00.674 13:37:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:00.674 13:37:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.674 13:37:27 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:00.674 13:37:27 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.674 13:37:27 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.674 13:37:27 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.674 13:37:27 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.674 13:37:27 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.674 13:37:27 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.674 13:37:27 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.674 13:37:27 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.674 13:37:27 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:00.674 13:37:27 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.935 request: 00:07:00.935 { 00:07:00.935 "method": "env_dpdk_get_mem_stats", 00:07:00.935 "req_id": 1 00:07:00.935 } 00:07:00.935 Got JSON-RPC error response 00:07:00.935 response: 00:07:00.935 { 00:07:00.935 "code": -32601, 00:07:00.935 "message": "Method not found" 00:07:00.935 } 00:07:00.935 13:37:27 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:00.935 13:37:27 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.935 13:37:27 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:00.935 13:37:27 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.935 13:37:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 891733 00:07:00.935 13:37:27 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 891733 ']' 00:07:00.935 13:37:27 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 891733 00:07:00.935 13:37:27 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:00.935 13:37:27 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.935 13:37:27 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 891733 00:07:00.935 13:37:27 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.935 13:37:27 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.935 13:37:27 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 891733' 00:07:00.935 killing process with pid 891733 00:07:00.935 13:37:27 app_cmdline -- common/autotest_common.sh@967 -- # kill 891733 00:07:00.935 13:37:27 app_cmdline -- common/autotest_common.sh@972 -- # wait 891733 00:07:01.195 00:07:01.195 real 0m1.556s 00:07:01.195 user 0m1.859s 00:07:01.195 sys 0m0.416s 00:07:01.195 13:37:27 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.195 
13:37:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.195 ************************************ 00:07:01.195 END TEST app_cmdline 00:07:01.195 ************************************ 00:07:01.195 13:37:27 -- common/autotest_common.sh@1142 -- # return 0 00:07:01.195 13:37:27 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:01.195 13:37:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:01.195 13:37:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.195 13:37:27 -- common/autotest_common.sh@10 -- # set +x 00:07:01.196 ************************************ 00:07:01.196 START TEST version 00:07:01.196 ************************************ 00:07:01.196 13:37:27 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:01.196 * Looking for test storage... 00:07:01.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:01.196 13:37:27 version -- app/version.sh@17 -- # get_header_version major 00:07:01.196 13:37:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.196 13:37:27 version -- app/version.sh@14 -- # cut -f2 00:07:01.196 13:37:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.196 13:37:27 version -- app/version.sh@17 -- # major=24 00:07:01.196 13:37:27 version -- app/version.sh@18 -- # get_header_version minor 00:07:01.196 13:37:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.196 13:37:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.196 13:37:27 version -- app/version.sh@14 -- # cut -f2 00:07:01.196 13:37:27 version -- app/version.sh@18 -- # minor=9 00:07:01.196 13:37:27 version -- app/version.sh@19 -- # get_header_version patch 00:07:01.196 13:37:27 version -- app/version.sh@14 -- # cut -f2 00:07:01.196 13:37:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.196 13:37:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.196 13:37:27 version -- app/version.sh@19 -- # patch=0 00:07:01.196 13:37:27 version -- app/version.sh@20 -- # get_header_version suffix 00:07:01.196 13:37:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.196 13:37:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.196 13:37:27 version -- app/version.sh@14 -- # cut -f2 00:07:01.196 13:37:27 version -- app/version.sh@20 -- # suffix=-pre 00:07:01.196 13:37:27 version -- app/version.sh@22 -- # version=24.9 00:07:01.196 13:37:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:01.196 13:37:27 version -- app/version.sh@28 -- # version=24.9rc0 00:07:01.196 13:37:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:01.196 13:37:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
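The version checks above boil down to a grep/cut/tr pipeline over include/spdk/version.h, and the result is then cross-checked against what the installed Python package reports in the [[ 24.9rc0 == 24.9rc0 ]] comparison that follows. A condensed sketch of that extraction, assuming the header keeps the tab-separated '#define SPDK_VERSION_<FIELD> value' layout implied by the log and with the workspace prefix shortened (the helper below is a paraphrase of what the log shows, not a copy of app/version.sh):

    get_header_version() {
        # pull one field (MAJOR, MINOR, PATCH, SUFFIX) out of version.h;
        # cut keeps the tab-separated value, tr strips the quotes around SUFFIX
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
            | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 24 on this run
    minor=$(get_header_version MINOR)    # 9
    patch=$(get_header_version PATCH)    # 0
    suffix=$(get_header_version SUFFIX)  # -pre
    # the harness then compares against the Python package:
    python3 -c 'import spdk; print(spdk.__version__)'   # 24.9rc0 on this run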
00:07:01.456 13:37:27 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:01.456 13:37:27 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:01.456 00:07:01.456 real 0m0.179s 00:07:01.456 user 0m0.086s 00:07:01.456 sys 0m0.126s 00:07:01.456 13:37:27 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.456 13:37:27 version -- common/autotest_common.sh@10 -- # set +x 00:07:01.456 ************************************ 00:07:01.456 END TEST version 00:07:01.456 ************************************ 00:07:01.456 13:37:27 -- common/autotest_common.sh@1142 -- # return 0 00:07:01.456 13:37:27 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:01.456 13:37:27 -- spdk/autotest.sh@198 -- # uname -s 00:07:01.456 13:37:27 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:01.456 13:37:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:01.456 13:37:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:01.456 13:37:27 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:01.456 13:37:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:01.456 13:37:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:01.456 13:37:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:01.456 13:37:27 -- common/autotest_common.sh@10 -- # set +x 00:07:01.456 13:37:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:01.456 13:37:27 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:01.456 13:37:27 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:01.456 13:37:27 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:01.456 13:37:27 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:01.456 13:37:27 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:01.456 13:37:27 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:01.456 13:37:27 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:01.456 13:37:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.456 13:37:27 -- common/autotest_common.sh@10 -- # set +x 00:07:01.456 ************************************ 00:07:01.456 START TEST nvmf_tcp 00:07:01.456 ************************************ 00:07:01.456 13:37:27 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:01.456 * Looking for test storage... 00:07:01.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:01.456 13:37:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:01.456 13:37:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:01.456 13:37:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.456 13:37:27 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:01.456 13:37:27 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.456 13:37:27 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.456 13:37:27 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.456 13:37:27 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.456 13:37:27 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.456 13:37:27 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.456 13:37:27 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.456 13:37:27 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.456 13:37:27 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.456 13:37:27 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.718 13:37:27 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.718 13:37:27 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.718 13:37:27 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.718 13:37:27 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.718 13:37:27 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.718 13:37:27 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.718 13:37:27 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:01.718 13:37:27 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:01.718 13:37:27 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:01.719 13:37:27 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:01.719 13:37:27 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:01.719 13:37:27 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:01.719 13:37:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.719 13:37:28 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:01.719 13:37:28 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:01.719 13:37:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:01.719 13:37:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.719 13:37:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.719 ************************************ 00:07:01.719 START TEST nvmf_example 00:07:01.719 ************************************ 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:01.719 * Looking for test storage... 
00:07:01.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:01.719 13:37:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:09.862 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:09.862 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:09.862 Found net devices under 
0000:4b:00.0: cvl_0_0 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:09.862 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:09.862 13:37:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:09.862 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:09.862 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:09.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:09.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:07:09.863 00:07:09.863 --- 10.0.0.2 ping statistics --- 00:07:09.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.863 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:09.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:09.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:07:09.863 00:07:09.863 --- 10.0.0.1 ping statistics --- 00:07:09.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.863 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=895831 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 895831 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 895831 ']' 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
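Condensing the nvmftestinit steps logged above: the harness turns the two e810 ports into a back-to-back NVMe/TCP path by moving the target-side port into its own network namespace, so target (10.0.0.2 on cvl_0_0) and initiator (10.0.0.1 on cvl_0_1) traffic actually crosses the link. A rough sketch using the interface names and addresses from this particular rig (other machines will differ):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # start from clean addresses
    ip netns add cvl_0_0_ns_spdk                         # namespace that will hold the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept TCP port 4420 arriving on cvl_0_1
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator sanity check

This is also why NVMF_APP and NVMF_EXAMPLE get prefixed with NVMF_TARGET_NS_CMD above: the example target is launched with ip netns exec cvl_0_0_ns_spdk so that it listens from inside the namespace.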
00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.863 13:37:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.863 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:09.863 13:37:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:09.863 EAL: No free 2048 kB hugepages reported on node 1 
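For reference, the target provisioning just logged is plain JSON-RPC: rpc_cmd is the test harness's shorthand for issuing these calls, sketched below as direct scripts/rpc.py invocations (assumed equivalent here for readability; rpc.py talks to /var/tmp/spdk.sock by default, paths are shortened to the repo root, and the NQN, serial number and addresses are the ones from this run):

    # provision the NVMe-oF/TCP target over JSON-RPC
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512            # 64 MB bdev, 512 B blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # then drive it from the initiator side, exactly as logged above:
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The ten-second mixed random read/write run at queue depth 64 with 4 KiB I/O is what produces the summary that follows (about 17.3 K IOPS, i.e. roughly 67.6 MiB/s).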
00:07:22.090 Initializing NVMe Controllers 00:07:22.090 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:22.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:22.090 Initialization complete. Launching workers. 00:07:22.090 ======================================================== 00:07:22.090 Latency(us) 00:07:22.090 Device Information : IOPS MiB/s Average min max 00:07:22.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17307.33 67.61 3697.51 826.25 15266.36 00:07:22.090 ======================================================== 00:07:22.090 Total : 17307.33 67.61 3697.51 826.25 15266.36 00:07:22.090 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:22.090 rmmod nvme_tcp 00:07:22.090 rmmod nvme_fabrics 00:07:22.090 rmmod nvme_keyring 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 895831 ']' 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 895831 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 895831 ']' 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 895831 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 895831 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 895831' 00:07:22.090 killing process with pid 895831 00:07:22.090 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 895831 00:07:22.091 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 895831 00:07:22.091 nvmf threads initialize successfully 00:07:22.091 bdev subsystem init successfully 00:07:22.091 created a nvmf target service 00:07:22.091 create targets's poll groups done 00:07:22.091 all subsystems of target started 00:07:22.091 nvmf target is running 00:07:22.091 all subsystems of target stopped 00:07:22.091 destroy targets's poll groups done 00:07:22.091 destroyed the nvmf target service 00:07:22.091 bdev subsystem finish successfully 00:07:22.091 nvmf threads destroy successfully 00:07:22.091 13:37:46 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:22.091 13:37:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:22.091 13:37:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:22.091 13:37:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:22.091 13:37:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:22.091 13:37:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.091 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.091 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.350 13:37:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:22.351 13:37:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:22.351 13:37:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:22.351 13:37:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.351 00:07:22.351 real 0m20.825s 00:07:22.351 user 0m46.815s 00:07:22.351 sys 0m6.285s 00:07:22.351 13:37:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.351 13:37:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.351 ************************************ 00:07:22.351 END TEST nvmf_example 00:07:22.351 ************************************ 00:07:22.613 13:37:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:22.613 13:37:48 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:22.613 13:37:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.613 13:37:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.613 13:37:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.613 ************************************ 00:07:22.613 START TEST nvmf_filesystem 00:07:22.613 ************************************ 00:07:22.613 13:37:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:22.613 * Looking for test storage... 
00:07:22.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:22.613 13:37:49 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:22.613 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:22.614 #define SPDK_CONFIG_H 00:07:22.614 #define SPDK_CONFIG_APPS 1 00:07:22.614 #define SPDK_CONFIG_ARCH native 00:07:22.614 #undef SPDK_CONFIG_ASAN 00:07:22.614 #undef SPDK_CONFIG_AVAHI 00:07:22.614 #undef SPDK_CONFIG_CET 00:07:22.614 #define SPDK_CONFIG_COVERAGE 1 00:07:22.614 #define SPDK_CONFIG_CROSS_PREFIX 00:07:22.614 #undef SPDK_CONFIG_CRYPTO 00:07:22.614 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:22.614 #undef SPDK_CONFIG_CUSTOMOCF 00:07:22.614 #undef SPDK_CONFIG_DAOS 00:07:22.614 #define SPDK_CONFIG_DAOS_DIR 00:07:22.614 #define SPDK_CONFIG_DEBUG 1 00:07:22.614 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:22.614 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:22.614 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:22.614 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:22.614 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:22.614 #undef SPDK_CONFIG_DPDK_UADK 00:07:22.614 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:22.614 #define SPDK_CONFIG_EXAMPLES 1 00:07:22.614 #undef SPDK_CONFIG_FC 00:07:22.614 #define SPDK_CONFIG_FC_PATH 00:07:22.614 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:22.614 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:22.614 #undef SPDK_CONFIG_FUSE 00:07:22.614 #undef SPDK_CONFIG_FUZZER 00:07:22.614 #define SPDK_CONFIG_FUZZER_LIB 00:07:22.614 #undef SPDK_CONFIG_GOLANG 00:07:22.614 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:22.614 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:22.614 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:22.614 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:22.614 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:22.614 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:22.614 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:22.614 #define SPDK_CONFIG_IDXD 1 00:07:22.614 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:22.614 #undef SPDK_CONFIG_IPSEC_MB 00:07:22.614 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:22.614 #define SPDK_CONFIG_ISAL 1 00:07:22.614 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:22.614 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:22.614 #define SPDK_CONFIG_LIBDIR 00:07:22.614 #undef SPDK_CONFIG_LTO 00:07:22.614 #define SPDK_CONFIG_MAX_LCORES 128 00:07:22.614 #define SPDK_CONFIG_NVME_CUSE 1 00:07:22.614 #undef SPDK_CONFIG_OCF 00:07:22.614 #define SPDK_CONFIG_OCF_PATH 00:07:22.614 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:22.614 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:22.614 #define SPDK_CONFIG_PGO_DIR 00:07:22.614 #undef SPDK_CONFIG_PGO_USE 00:07:22.614 #define SPDK_CONFIG_PREFIX /usr/local 00:07:22.614 #undef SPDK_CONFIG_RAID5F 00:07:22.614 #undef SPDK_CONFIG_RBD 00:07:22.614 #define SPDK_CONFIG_RDMA 1 00:07:22.614 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:22.614 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:22.614 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:22.614 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:22.614 #define SPDK_CONFIG_SHARED 1 00:07:22.614 #undef SPDK_CONFIG_SMA 00:07:22.614 #define SPDK_CONFIG_TESTS 1 00:07:22.614 #undef SPDK_CONFIG_TSAN 00:07:22.614 #define SPDK_CONFIG_UBLK 1 00:07:22.614 #define SPDK_CONFIG_UBSAN 1 00:07:22.614 #undef SPDK_CONFIG_UNIT_TESTS 00:07:22.614 #undef SPDK_CONFIG_URING 00:07:22.614 #define SPDK_CONFIG_URING_PATH 00:07:22.614 #undef SPDK_CONFIG_URING_ZNS 00:07:22.614 #undef SPDK_CONFIG_USDT 00:07:22.614 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:22.614 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:22.614 #define SPDK_CONFIG_VFIO_USER 1 00:07:22.614 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:22.614 #define SPDK_CONFIG_VHOST 1 00:07:22.614 #define SPDK_CONFIG_VIRTIO 1 00:07:22.614 #undef SPDK_CONFIG_VTUNE 00:07:22.614 #define SPDK_CONFIG_VTUNE_DIR 00:07:22.614 #define SPDK_CONFIG_WERROR 1 00:07:22.614 #define SPDK_CONFIG_WPDK_DIR 00:07:22.614 #undef SPDK_CONFIG_XNVME 00:07:22.614 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:22.614 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:22.615 13:37:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:22.615 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:22.877 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:22.877 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:22.877 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
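[Editor's note] The trace above shows autotest_common.sh exporting the per-run test switches for this job (SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_TEST_NVMF_NICS=e810, SPDK_RUN_UBSAN=1) along with the default RPC socket DEFAULT_RPC_ADDR=/var/tmp/spdk.sock and the sanitizer options. A minimal sketch of how a downstream wrapper could key off those exported flags is shown here; the wrapper, its function name run_filesystem_suite, and the skip message are illustrative assumptions and are not part of the SPDK scripts being traced.

#!/usr/bin/env bash
# Illustrative sketch only -- not taken from autotest_common.sh.
# Gate an NVMe-oF/TCP test body on the flags exported by the autotest environment.
set -euo pipefail

: "${SPDK_TEST_NVMF:=0}"                   # exported as 1 in this run
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"       # exported as tcp in this run
: "${DEFAULT_RPC_ADDR:=/var/tmp/spdk.sock}"

run_filesystem_suite() {
    # Placeholder for the actual test body (hypothetical).
    echo "would run nvmf_filesystem against RPC socket $DEFAULT_RPC_ADDR"
}

if [[ "$SPDK_TEST_NVMF" -eq 1 && "$SPDK_TEST_NVMF_TRANSPORT" == "tcp" ]]; then
    run_filesystem_suite
else
    echo "SPDK_TEST_NVMF=$SPDK_TEST_NVMF transport=$SPDK_TEST_NVMF_TRANSPORT -- skipping"
fi

The same pattern (default the variable, then branch on it) is what lets the traced scripts run unchanged whether they are driven by autorun-spdk.conf in CI or invoked by hand with only a subset of the flags set.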
00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 898637 ]] 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 898637 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.HrqeJF 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.HrqeJF/tests/target /tmp/spdk.HrqeJF 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954236928 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330192896 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118552641536 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129371013120 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10818371584 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680796160 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864503296 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874202624 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9699328 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:22.878 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:22.879 13:37:49 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684212224 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1294336 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937097216 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937101312 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:22.879 * Looking for test storage... 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118552641536 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=13032964096 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:22.879 13:37:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:31.024 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:07:31.024 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:31.024 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:31.024 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.024 13:37:56 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:31.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:07:31.025 00:07:31.025 --- 10.0.0.2 ping statistics --- 00:07:31.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.025 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:31.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:07:31.025 00:07:31.025 --- 10.0.0.1 ping statistics --- 00:07:31.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.025 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.025 ************************************ 00:07:31.025 START TEST nvmf_filesystem_no_in_capsule 00:07:31.025 ************************************ 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=902291 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 902291 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 902291 ']' 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.025 13:37:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.025 [2024-07-15 13:37:56.493577] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:31.025 [2024-07-15 13:37:56.493636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.025 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.025 [2024-07-15 13:37:56.563401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.025 [2024-07-15 13:37:56.642248] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.025 [2024-07-15 13:37:56.642288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.025 [2024-07-15 13:37:56.642296] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.025 [2024-07-15 13:37:56.642303] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.025 [2024-07-15 13:37:56.642309] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.025 [2024-07-15 13:37:56.642445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.025 [2024-07-15 13:37:56.642560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.025 [2024-07-15 13:37:56.642701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.025 [2024-07-15 13:37:56.642703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.025 [2024-07-15 13:37:57.320788] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.025 
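Condensed, the nvmf_tcp_init / nvmfappstart sequence traced above comes down to roughly the following. Interface names, addresses and flags are the ones printed in this log; the real helpers in nvmf/common.sh and autotest_common.sh add address flushes, error checks and cleanup traps that are omitted from this sketch.

    # Isolate one E810 port (cvl_0_0) in its own network namespace so the
    # target and the initiator talk over a real TCP link on the same host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # reachability in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # nvmf_tgt then runs inside the namespace (pid 902291 in this run):
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &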
13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.025 Malloc1 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.025 [2024-07-15 13:37:57.447524] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:31.025 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:31.026 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:31.026 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:31.026 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.026 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
# set +x 00:07:31.026 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.026 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:31.026 { 00:07:31.026 "name": "Malloc1", 00:07:31.026 "aliases": [ 00:07:31.026 "8d628a70-009e-4767-a34e-8ebc58fc2b90" 00:07:31.026 ], 00:07:31.026 "product_name": "Malloc disk", 00:07:31.026 "block_size": 512, 00:07:31.026 "num_blocks": 1048576, 00:07:31.026 "uuid": "8d628a70-009e-4767-a34e-8ebc58fc2b90", 00:07:31.026 "assigned_rate_limits": { 00:07:31.026 "rw_ios_per_sec": 0, 00:07:31.026 "rw_mbytes_per_sec": 0, 00:07:31.026 "r_mbytes_per_sec": 0, 00:07:31.026 "w_mbytes_per_sec": 0 00:07:31.026 }, 00:07:31.026 "claimed": true, 00:07:31.026 "claim_type": "exclusive_write", 00:07:31.026 "zoned": false, 00:07:31.026 "supported_io_types": { 00:07:31.026 "read": true, 00:07:31.026 "write": true, 00:07:31.026 "unmap": true, 00:07:31.026 "flush": true, 00:07:31.026 "reset": true, 00:07:31.026 "nvme_admin": false, 00:07:31.026 "nvme_io": false, 00:07:31.026 "nvme_io_md": false, 00:07:31.026 "write_zeroes": true, 00:07:31.026 "zcopy": true, 00:07:31.026 "get_zone_info": false, 00:07:31.026 "zone_management": false, 00:07:31.026 "zone_append": false, 00:07:31.026 "compare": false, 00:07:31.026 "compare_and_write": false, 00:07:31.026 "abort": true, 00:07:31.026 "seek_hole": false, 00:07:31.026 "seek_data": false, 00:07:31.026 "copy": true, 00:07:31.026 "nvme_iov_md": false 00:07:31.026 }, 00:07:31.026 "memory_domains": [ 00:07:31.026 { 00:07:31.026 "dma_device_id": "system", 00:07:31.026 "dma_device_type": 1 00:07:31.026 }, 00:07:31.026 { 00:07:31.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.026 "dma_device_type": 2 00:07:31.026 } 00:07:31.026 ], 00:07:31.026 "driver_specific": {} 00:07:31.026 } 00:07:31.026 ]' 00:07:31.026 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:31.026 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:31.026 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:31.361 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:31.361 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:31.361 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:31.361 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:31.361 13:37:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:32.773 13:37:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:32.773 13:37:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:32.773 13:37:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:07:32.773 13:37:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:32.773 13:37:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:34.684 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:34.685 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:34.685 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.685 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:34.685 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.685 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:34.685 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:34.685 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:34.685 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:34.685 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:34.685 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:34.685 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:34.685 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:34.685 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:34.685 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:34.685 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:34.685 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:34.945 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:35.515 13:38:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:36.456 13:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:36.456 13:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:36.456 13:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:36.456 13:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.456 13:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.456 ************************************ 
00:07:36.456 START TEST filesystem_ext4 00:07:36.456 ************************************ 00:07:36.456 13:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:36.456 13:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:36.456 13:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:36.456 13:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:36.456 13:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:36.456 13:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:36.456 13:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:36.456 13:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:36.456 13:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:36.456 13:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:36.456 13:38:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:36.456 mke2fs 1.46.5 (30-Dec-2021) 00:07:36.717 Discarding device blocks: 0/522240 done 00:07:36.717 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:36.717 Filesystem UUID: 58d976f5-cf14-40a3-826e-d086ad6388e6 00:07:36.717 Superblock backups stored on blocks: 00:07:36.717 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:36.717 00:07:36.717 Allocating group tables: 0/64 done 00:07:36.717 Writing inode tables: 0/64 done 00:07:36.717 Creating journal (8192 blocks): done 00:07:36.717 Writing superblocks and filesystem accounting information: 0/64 done 00:07:36.717 00:07:36.717 13:38:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:36.717 13:38:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:37.658 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:37.658 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:37.658 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:37.658 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:37.658 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:37.658 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:37.919 13:38:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 902291 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:37.919 00:07:37.919 real 0m1.247s 00:07:37.919 user 0m0.029s 00:07:37.919 sys 0m0.068s 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:37.919 ************************************ 00:07:37.919 END TEST filesystem_ext4 00:07:37.919 ************************************ 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.919 ************************************ 00:07:37.919 START TEST filesystem_btrfs 00:07:37.919 ************************************ 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:37.919 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:37.919 
13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:38.178 btrfs-progs v6.6.2 00:07:38.178 See https://btrfs.readthedocs.io for more information. 00:07:38.178 00:07:38.178 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:38.178 NOTE: several default settings have changed in version 5.15, please make sure 00:07:38.178 this does not affect your deployments: 00:07:38.178 - DUP for metadata (-m dup) 00:07:38.178 - enabled no-holes (-O no-holes) 00:07:38.178 - enabled free-space-tree (-R free-space-tree) 00:07:38.178 00:07:38.178 Label: (null) 00:07:38.178 UUID: 4fb678dd-8da3-426f-84d0-560a8e1ac65b 00:07:38.178 Node size: 16384 00:07:38.178 Sector size: 4096 00:07:38.178 Filesystem size: 510.00MiB 00:07:38.178 Block group profiles: 00:07:38.178 Data: single 8.00MiB 00:07:38.178 Metadata: DUP 32.00MiB 00:07:38.178 System: DUP 8.00MiB 00:07:38.178 SSD detected: yes 00:07:38.178 Zoned device: no 00:07:38.178 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:38.178 Runtime features: free-space-tree 00:07:38.178 Checksum: crc32c 00:07:38.178 Number of devices: 1 00:07:38.178 Devices: 00:07:38.178 ID SIZE PATH 00:07:38.178 1 510.00MiB /dev/nvme0n1p1 00:07:38.178 00:07:38.178 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:38.178 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:38.438 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:38.438 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:38.438 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:38.438 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:38.438 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:38.438 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:38.698 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 902291 00:07:38.698 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:38.698 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:38.698 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:38.698 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:38.698 00:07:38.698 real 0m0.708s 00:07:38.698 user 0m0.035s 00:07:38.698 sys 0m0.127s 00:07:38.699 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.699 13:38:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:38.699 
************************************ 00:07:38.699 END TEST filesystem_btrfs 00:07:38.699 ************************************ 00:07:38.699 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:38.699 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:38.699 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:38.699 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.699 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.699 ************************************ 00:07:38.699 START TEST filesystem_xfs 00:07:38.699 ************************************ 00:07:38.699 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:38.699 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:38.699 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:38.699 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:38.699 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:38.699 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:38.699 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:38.699 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:38.699 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:38.699 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:38.699 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:38.699 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:38.699 = sectsz=512 attr=2, projid32bit=1 00:07:38.699 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:38.699 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:38.699 data = bsize=4096 blocks=130560, imaxpct=25 00:07:38.699 = sunit=0 swidth=0 blks 00:07:38.699 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:38.699 log =internal log bsize=4096 blocks=16384, version=2 00:07:38.699 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:38.699 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:39.639 Discarding blocks...Done. 
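The ext4, btrfs and xfs runs above all go through the same make_filesystem helper; judging from the trace it reduces to roughly the sketch below. The real function in autotest_common.sh also carries a retry counter (the "local i=0" and "return 0" lines in the trace), which is left out here.

    # Simplified reading of the traced make_filesystem helper: pick the
    # right "force" flag per filesystem type, then run mkfs on the GPT
    # partition created earlier with parted.
    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F        # mke2fs forces with -F
        else
            force=-f        # mkfs.btrfs / mkfs.xfs force with -f
        fi
        mkfs."$fstype" $force "$dev_name"
    }

    # as invoked by target/filesystem.sh@21 for the run above:
    make_filesystem xfs /dev/nvme0n1p1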
00:07:39.639 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:39.639 13:38:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:41.554 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.554 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:41.554 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.554 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:41.554 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:41.554 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.554 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 902291 00:07:41.554 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.554 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.554 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.554 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.554 00:07:41.554 real 0m2.968s 00:07:41.554 user 0m0.029s 00:07:41.554 sys 0m0.075s 00:07:41.554 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.554 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:41.554 ************************************ 00:07:41.554 END TEST filesystem_xfs 00:07:41.554 ************************************ 00:07:41.555 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:41.815 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:42.076 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:42.076 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:42.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.337 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:42.337 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:42.337 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:42.337 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.337 13:38:08 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:42.337 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.337 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:42.337 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.337 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.337 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.337 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.337 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:42.337 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 902291 00:07:42.338 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 902291 ']' 00:07:42.338 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 902291 00:07:42.338 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:42.338 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:42.338 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 902291 00:07:42.338 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:42.338 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:42.338 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 902291' 00:07:42.338 killing process with pid 902291 00:07:42.338 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 902291 00:07:42.338 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 902291 00:07:42.598 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:42.598 00:07:42.598 real 0m12.615s 00:07:42.598 user 0m49.639s 00:07:42.598 sys 0m1.249s 00:07:42.598 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.598 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.598 ************************************ 00:07:42.598 END TEST nvmf_filesystem_no_in_capsule 00:07:42.598 ************************************ 00:07:42.598 13:38:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:42.598 13:38:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:42.598 13:38:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:07:42.598 13:38:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.598 13:38:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.598 ************************************ 00:07:42.598 START TEST nvmf_filesystem_in_capsule 00:07:42.598 ************************************ 00:07:42.598 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:42.860 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:42.860 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:42.860 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:42.860 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.860 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.860 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=905162 00:07:42.860 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 905162 00:07:42.860 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.860 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 905162 ']' 00:07:42.860 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.860 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.860 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.860 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.860 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.860 [2024-07-15 13:38:09.180071] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:42.860 [2024-07-15 13:38:09.180118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.860 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.860 [2024-07-15 13:38:09.244889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.860 [2024-07-15 13:38:09.311687] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.860 [2024-07-15 13:38:09.311723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:42.860 [2024-07-15 13:38:09.311730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.860 [2024-07-15 13:38:09.311737] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.860 [2024-07-15 13:38:09.311743] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.860 [2024-07-15 13:38:09.311880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.860 [2024-07-15 13:38:09.311996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.860 [2024-07-15 13:38:09.312168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.860 [2024-07-15 13:38:09.312169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.431 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.431 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:43.431 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:43.431 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.431 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.693 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.693 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:43.693 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:43.693 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.693 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.693 [2024-07-15 13:38:10.000820] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.693 Malloc1 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.693 13:38:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.693 [2024-07-15 13:38:10.132458] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:43.693 { 00:07:43.693 "name": "Malloc1", 00:07:43.693 "aliases": [ 00:07:43.693 "3f2b7174-e6d3-4800-8706-af81279e2e9a" 00:07:43.693 ], 00:07:43.693 "product_name": "Malloc disk", 00:07:43.693 "block_size": 512, 00:07:43.693 "num_blocks": 1048576, 00:07:43.693 "uuid": "3f2b7174-e6d3-4800-8706-af81279e2e9a", 00:07:43.693 "assigned_rate_limits": { 00:07:43.693 "rw_ios_per_sec": 0, 00:07:43.693 "rw_mbytes_per_sec": 0, 00:07:43.693 "r_mbytes_per_sec": 0, 00:07:43.693 "w_mbytes_per_sec": 0 00:07:43.693 }, 00:07:43.693 "claimed": true, 00:07:43.693 "claim_type": "exclusive_write", 00:07:43.693 "zoned": false, 00:07:43.693 "supported_io_types": { 00:07:43.693 "read": true, 00:07:43.693 "write": true, 00:07:43.693 "unmap": true, 00:07:43.693 "flush": true, 00:07:43.693 "reset": true, 00:07:43.693 "nvme_admin": false, 00:07:43.693 "nvme_io": false, 00:07:43.693 "nvme_io_md": false, 00:07:43.693 "write_zeroes": true, 00:07:43.693 "zcopy": true, 00:07:43.693 "get_zone_info": false, 00:07:43.693 "zone_management": false, 00:07:43.693 
"zone_append": false, 00:07:43.693 "compare": false, 00:07:43.693 "compare_and_write": false, 00:07:43.693 "abort": true, 00:07:43.693 "seek_hole": false, 00:07:43.693 "seek_data": false, 00:07:43.693 "copy": true, 00:07:43.693 "nvme_iov_md": false 00:07:43.693 }, 00:07:43.693 "memory_domains": [ 00:07:43.693 { 00:07:43.693 "dma_device_id": "system", 00:07:43.693 "dma_device_type": 1 00:07:43.693 }, 00:07:43.693 { 00:07:43.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.693 "dma_device_type": 2 00:07:43.693 } 00:07:43.693 ], 00:07:43.693 "driver_specific": {} 00:07:43.693 } 00:07:43.693 ]' 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:43.693 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:43.955 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:43.955 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:43.955 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:43.955 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:43.955 13:38:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:45.342 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:45.342 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:45.342 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:45.342 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:45.342 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:47.885 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:47.885 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:47.885 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:49.268 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:49.268 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:49.268 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:49.268 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.268 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.268 ************************************ 00:07:49.268 START TEST filesystem_in_capsule_ext4 00:07:49.268 ************************************ 00:07:49.268 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:49.268 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:49.268 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.268 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:49.268 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:49.268 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:49.268 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:49.268 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:49.268 13:38:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:49.268 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:49.268 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:49.268 mke2fs 1.46.5 (30-Dec-2021) 00:07:49.268 Discarding device blocks: 0/522240 done 00:07:49.268 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:49.268 Filesystem UUID: f62b9578-5aa9-4e99-af96-c555265faa9d 00:07:49.268 Superblock backups stored on blocks: 00:07:49.268 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:49.268 00:07:49.268 Allocating group tables: 0/64 done 00:07:49.268 Writing inode tables: 0/64 done 00:07:51.812 Creating journal (8192 blocks): done 00:07:52.383 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:07:52.383 00:07:52.383 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:52.383 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:52.646 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 905162 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:52.942 00:07:52.942 real 0m3.796s 00:07:52.942 user 0m0.028s 00:07:52.942 sys 0m0.072s 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:52.942 ************************************ 00:07:52.942 END TEST filesystem_in_capsule_ext4 00:07:52.942 ************************************ 
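Condensed, the filesystem_in_capsule_ext4 pass that just finished reduces to the manual sequence below. This is a sketch reconstructed from the xtrace above, not the literal target/filesystem.sh code; the partition /dev/nvme0n1p1 and mount point /mnt/device are simply the names this run uses on the namespace connected over NVMe/TCP.

    # partition the connected namespace and re-read the partition table
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe

    # format, mount, exercise a small create/delete cycle, then unmount
    mkfs.ext4 -F /dev/nvme0n1p1
    mkdir -p /mnt/device
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device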
00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.942 ************************************ 00:07:52.942 START TEST filesystem_in_capsule_btrfs 00:07:52.942 ************************************ 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:52.942 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:53.203 btrfs-progs v6.6.2 00:07:53.203 See https://btrfs.readthedocs.io for more information. 00:07:53.203 00:07:53.203 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:53.203 NOTE: several default settings have changed in version 5.15, please make sure 00:07:53.203 this does not affect your deployments: 00:07:53.203 - DUP for metadata (-m dup) 00:07:53.203 - enabled no-holes (-O no-holes) 00:07:53.203 - enabled free-space-tree (-R free-space-tree) 00:07:53.203 00:07:53.203 Label: (null) 00:07:53.203 UUID: 69a8b252-e45e-4ca0-8fa1-add1beb39270 00:07:53.203 Node size: 16384 00:07:53.203 Sector size: 4096 00:07:53.203 Filesystem size: 510.00MiB 00:07:53.203 Block group profiles: 00:07:53.203 Data: single 8.00MiB 00:07:53.203 Metadata: DUP 32.00MiB 00:07:53.203 System: DUP 8.00MiB 00:07:53.203 SSD detected: yes 00:07:53.203 Zoned device: no 00:07:53.203 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:53.203 Runtime features: free-space-tree 00:07:53.203 Checksum: crc32c 00:07:53.203 Number of devices: 1 00:07:53.203 Devices: 00:07:53.203 ID SIZE PATH 00:07:53.203 1 510.00MiB /dev/nvme0n1p1 00:07:53.203 00:07:53.203 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:53.203 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:53.464 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:53.464 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:53.464 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:53.464 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:53.464 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:53.464 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:53.464 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 905162 00:07:53.464 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:53.465 00:07:53.465 real 0m0.522s 00:07:53.465 user 0m0.028s 00:07:53.465 sys 0m0.129s 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:53.465 ************************************ 00:07:53.465 END TEST filesystem_in_capsule_btrfs 00:07:53.465 ************************************ 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.465 ************************************ 00:07:53.465 START TEST filesystem_in_capsule_xfs 00:07:53.465 ************************************ 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:53.465 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:53.465 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:53.465 = sectsz=512 attr=2, projid32bit=1 00:07:53.465 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:53.465 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:53.465 data = bsize=4096 blocks=130560, imaxpct=25 00:07:53.465 = sunit=0 swidth=0 blks 00:07:53.465 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:53.465 log =internal log bsize=4096 blocks=16384, version=2 00:07:53.465 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:53.465 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:54.407 Discarding blocks...Done. 
00:07:54.407 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:54.407 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:56.318 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:56.318 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:56.318 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:56.318 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:56.319 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:56.319 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:56.319 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 905162 00:07:56.319 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:56.319 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:56.319 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:56.319 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:56.319 00:07:56.319 real 0m2.705s 00:07:56.319 user 0m0.026s 00:07:56.319 sys 0m0.075s 00:07:56.319 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.319 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:56.319 ************************************ 00:07:56.319 END TEST filesystem_in_capsule_xfs 00:07:56.319 ************************************ 00:07:56.319 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:56.319 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:56.580 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:56.580 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:56.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:56.580 13:38:23 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 905162 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 905162 ']' 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 905162 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:56.580 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 905162 00:07:56.844 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:56.844 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:56.844 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 905162' 00:07:56.844 killing process with pid 905162 00:07:56.844 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 905162 00:07:56.844 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 905162 00:07:56.844 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:56.844 00:07:56.844 real 0m14.242s 00:07:56.844 user 0m56.239s 00:07:56.844 sys 0m1.221s 00:07:56.844 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.844 13:38:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.844 ************************************ 00:07:56.844 END TEST nvmf_filesystem_in_capsule 00:07:56.844 ************************************ 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:57.106 rmmod nvme_tcp 00:07:57.106 rmmod nvme_fabrics 00:07:57.106 rmmod nvme_keyring 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.106 13:38:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.022 13:38:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:59.022 00:07:59.022 real 0m36.599s 00:07:59.022 user 1m48.080s 00:07:59.022 sys 0m7.955s 00:07:59.022 13:38:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.022 13:38:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.022 ************************************ 00:07:59.022 END TEST nvmf_filesystem 00:07:59.022 ************************************ 00:07:59.284 13:38:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:59.284 13:38:25 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:59.284 13:38:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:59.284 13:38:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.284 13:38:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:59.284 ************************************ 00:07:59.284 START TEST nvmf_target_discovery 00:07:59.284 ************************************ 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:59.284 * Looking for test storage... 
00:07:59.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:59.284 13:38:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.425 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.425 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:07.425 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:07.425 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.426 13:38:32 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:07.426 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:07.426 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:07.426 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:07.426 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:07.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:08:07.426 00:08:07.426 --- 10.0.0.2 ping statistics --- 00:08:07.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.426 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:07.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:08:07.426 00:08:07.426 --- 10.0.0.1 ping statistics --- 00:08:07.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.426 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=912321 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 912321 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 912321 ']' 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:07.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.426 13:38:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.426 [2024-07-15 13:38:32.968303] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:07.426 [2024-07-15 13:38:32.968365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.427 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.427 [2024-07-15 13:38:33.038128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.427 [2024-07-15 13:38:33.113889] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.427 [2024-07-15 13:38:33.113926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.427 [2024-07-15 13:38:33.113933] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.427 [2024-07-15 13:38:33.113940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.427 [2024-07-15 13:38:33.113946] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.427 [2024-07-15 13:38:33.114083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.427 [2024-07-15 13:38:33.114209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.427 [2024-07-15 13:38:33.114547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.427 [2024-07-15 13:38:33.114548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.427 [2024-07-15 13:38:33.799789] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.427 Null1 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.427 [2024-07-15 13:38:33.860084] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.427 Null2 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:07.427 13:38:33 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.427 Null3 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.427 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.689 Null4 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.689 13:38:33 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.689 13:38:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.689 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.689 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.689 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.689 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.689 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.689 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:07.689 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.689 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.689 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.689 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:08:07.689 00:08:07.689 Discovery Log Number of Records 6, Generation counter 6 00:08:07.689 =====Discovery Log Entry 0====== 00:08:07.689 trtype: tcp 00:08:07.689 adrfam: ipv4 00:08:07.689 subtype: current discovery subsystem 00:08:07.689 treq: not required 00:08:07.689 portid: 0 00:08:07.689 trsvcid: 4420 00:08:07.689 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:07.689 traddr: 10.0.0.2 00:08:07.689 eflags: explicit discovery connections, duplicate discovery information 00:08:07.689 sectype: none 00:08:07.689 =====Discovery Log Entry 1====== 00:08:07.689 trtype: tcp 00:08:07.689 adrfam: ipv4 00:08:07.689 subtype: nvme subsystem 00:08:07.689 treq: not required 00:08:07.689 portid: 0 00:08:07.689 trsvcid: 4420 00:08:07.689 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:07.689 traddr: 10.0.0.2 00:08:07.689 eflags: none 00:08:07.689 sectype: none 00:08:07.689 =====Discovery Log Entry 2====== 00:08:07.689 trtype: tcp 00:08:07.689 adrfam: ipv4 00:08:07.689 subtype: nvme subsystem 00:08:07.689 treq: not required 00:08:07.689 portid: 0 00:08:07.689 trsvcid: 4420 00:08:07.689 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:07.689 traddr: 10.0.0.2 00:08:07.689 eflags: none 00:08:07.689 sectype: none 00:08:07.689 =====Discovery Log Entry 3====== 00:08:07.689 trtype: tcp 00:08:07.689 adrfam: ipv4 00:08:07.689 subtype: nvme subsystem 00:08:07.689 treq: not required 00:08:07.689 portid: 0 00:08:07.689 trsvcid: 4420 00:08:07.689 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:07.689 traddr: 10.0.0.2 00:08:07.689 eflags: none 00:08:07.689 sectype: none 00:08:07.689 =====Discovery Log Entry 4====== 00:08:07.689 trtype: tcp 00:08:07.689 adrfam: ipv4 00:08:07.689 subtype: nvme subsystem 00:08:07.689 treq: not required 
00:08:07.689 portid: 0 00:08:07.689 trsvcid: 4420 00:08:07.689 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:07.689 traddr: 10.0.0.2 00:08:07.689 eflags: none 00:08:07.689 sectype: none 00:08:07.689 =====Discovery Log Entry 5====== 00:08:07.689 trtype: tcp 00:08:07.689 adrfam: ipv4 00:08:07.689 subtype: discovery subsystem referral 00:08:07.689 treq: not required 00:08:07.689 portid: 0 00:08:07.689 trsvcid: 4430 00:08:07.689 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:07.689 traddr: 10.0.0.2 00:08:07.689 eflags: none 00:08:07.689 sectype: none 00:08:07.689 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:07.689 Perform nvmf subsystem discovery via RPC 00:08:07.689 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:07.689 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.689 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.951 [ 00:08:07.951 { 00:08:07.951 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:07.951 "subtype": "Discovery", 00:08:07.951 "listen_addresses": [ 00:08:07.951 { 00:08:07.951 "trtype": "TCP", 00:08:07.951 "adrfam": "IPv4", 00:08:07.951 "traddr": "10.0.0.2", 00:08:07.951 "trsvcid": "4420" 00:08:07.951 } 00:08:07.951 ], 00:08:07.951 "allow_any_host": true, 00:08:07.951 "hosts": [] 00:08:07.951 }, 00:08:07.951 { 00:08:07.951 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.951 "subtype": "NVMe", 00:08:07.951 "listen_addresses": [ 00:08:07.951 { 00:08:07.951 "trtype": "TCP", 00:08:07.951 "adrfam": "IPv4", 00:08:07.951 "traddr": "10.0.0.2", 00:08:07.951 "trsvcid": "4420" 00:08:07.951 } 00:08:07.951 ], 00:08:07.951 "allow_any_host": true, 00:08:07.951 "hosts": [], 00:08:07.951 "serial_number": "SPDK00000000000001", 00:08:07.951 "model_number": "SPDK bdev Controller", 00:08:07.951 "max_namespaces": 32, 00:08:07.951 "min_cntlid": 1, 00:08:07.951 "max_cntlid": 65519, 00:08:07.951 "namespaces": [ 00:08:07.951 { 00:08:07.951 "nsid": 1, 00:08:07.951 "bdev_name": "Null1", 00:08:07.951 "name": "Null1", 00:08:07.951 "nguid": "8AE10782F7CA4C9486CB6BCD66651CD9", 00:08:07.951 "uuid": "8ae10782-f7ca-4c94-86cb-6bcd66651cd9" 00:08:07.951 } 00:08:07.951 ] 00:08:07.951 }, 00:08:07.951 { 00:08:07.951 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:07.951 "subtype": "NVMe", 00:08:07.951 "listen_addresses": [ 00:08:07.951 { 00:08:07.951 "trtype": "TCP", 00:08:07.951 "adrfam": "IPv4", 00:08:07.951 "traddr": "10.0.0.2", 00:08:07.951 "trsvcid": "4420" 00:08:07.951 } 00:08:07.951 ], 00:08:07.951 "allow_any_host": true, 00:08:07.951 "hosts": [], 00:08:07.951 "serial_number": "SPDK00000000000002", 00:08:07.951 "model_number": "SPDK bdev Controller", 00:08:07.951 "max_namespaces": 32, 00:08:07.951 "min_cntlid": 1, 00:08:07.951 "max_cntlid": 65519, 00:08:07.951 "namespaces": [ 00:08:07.951 { 00:08:07.951 "nsid": 1, 00:08:07.951 "bdev_name": "Null2", 00:08:07.951 "name": "Null2", 00:08:07.951 "nguid": "9FBE21F1DF5247E5BE800A39F2A87650", 00:08:07.951 "uuid": "9fbe21f1-df52-47e5-be80-0a39f2a87650" 00:08:07.951 } 00:08:07.951 ] 00:08:07.951 }, 00:08:07.951 { 00:08:07.951 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:07.951 "subtype": "NVMe", 00:08:07.951 "listen_addresses": [ 00:08:07.951 { 00:08:07.951 "trtype": "TCP", 00:08:07.951 "adrfam": "IPv4", 00:08:07.951 "traddr": "10.0.0.2", 00:08:07.951 "trsvcid": "4420" 00:08:07.951 } 00:08:07.951 ], 00:08:07.951 "allow_any_host": true, 
00:08:07.951 "hosts": [], 00:08:07.951 "serial_number": "SPDK00000000000003", 00:08:07.951 "model_number": "SPDK bdev Controller", 00:08:07.951 "max_namespaces": 32, 00:08:07.951 "min_cntlid": 1, 00:08:07.951 "max_cntlid": 65519, 00:08:07.951 "namespaces": [ 00:08:07.951 { 00:08:07.951 "nsid": 1, 00:08:07.951 "bdev_name": "Null3", 00:08:07.951 "name": "Null3", 00:08:07.951 "nguid": "1FDA42CF507442D18EB11E50C525C759", 00:08:07.951 "uuid": "1fda42cf-5074-42d1-8eb1-1e50c525c759" 00:08:07.951 } 00:08:07.951 ] 00:08:07.951 }, 00:08:07.951 { 00:08:07.951 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:07.951 "subtype": "NVMe", 00:08:07.951 "listen_addresses": [ 00:08:07.951 { 00:08:07.951 "trtype": "TCP", 00:08:07.951 "adrfam": "IPv4", 00:08:07.951 "traddr": "10.0.0.2", 00:08:07.951 "trsvcid": "4420" 00:08:07.951 } 00:08:07.951 ], 00:08:07.951 "allow_any_host": true, 00:08:07.951 "hosts": [], 00:08:07.951 "serial_number": "SPDK00000000000004", 00:08:07.951 "model_number": "SPDK bdev Controller", 00:08:07.951 "max_namespaces": 32, 00:08:07.951 "min_cntlid": 1, 00:08:07.951 "max_cntlid": 65519, 00:08:07.951 "namespaces": [ 00:08:07.951 { 00:08:07.951 "nsid": 1, 00:08:07.951 "bdev_name": "Null4", 00:08:07.951 "name": "Null4", 00:08:07.951 "nguid": "4018CD4F947D42508F380229978D5F7C", 00:08:07.951 "uuid": "4018cd4f-947d-4250-8f38-0229978d5f7c" 00:08:07.951 } 00:08:07.951 ] 00:08:07.951 } 00:08:07.951 ] 00:08:07.951 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.951 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:07.951 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.951 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.951 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.951 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.951 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.951 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:07.951 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.951 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.951 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.951 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.951 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:07.951 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:07.952 rmmod nvme_tcp 00:08:07.952 rmmod nvme_fabrics 00:08:07.952 rmmod nvme_keyring 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 912321 ']' 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 912321 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 912321 ']' 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 912321 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:07.952 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 912321 00:08:08.213 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:08.213 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:08.213 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 912321' 00:08:08.213 killing process with pid 912321 00:08:08.213 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 912321 00:08:08.213 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 912321 00:08:08.213 13:38:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.213 13:38:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:08.213 13:38:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.213 13:38:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.213 13:38:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.213 13:38:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.213 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.213 13:38:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.758 13:38:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:10.758 00:08:10.758 real 0m11.076s 00:08:10.758 user 0m8.262s 00:08:10.758 sys 0m5.671s 00:08:10.758 13:38:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.758 13:38:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.758 ************************************ 00:08:10.758 END TEST nvmf_target_discovery 00:08:10.758 ************************************ 00:08:10.758 13:38:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:08:10.758 13:38:36 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:10.758 13:38:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:10.758 13:38:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.758 13:38:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:10.758 ************************************ 00:08:10.758 START TEST nvmf_referrals 00:08:10.758 ************************************ 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:10.758 * Looking for test storage... 00:08:10.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
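The referral addresses being defined here (127.0.0.2 through 127.0.0.4, referral port 4430 just below) are exercised later in this same trace: referrals.sh registers them, reads them back both over RPC and from the on-the-wire discovery log, then removes them. A condensed sketch, regrouping commands that appear further down in the log and assuming the 10.0.0.2:8009 discovery listener added at referrals.sh@41 is up (rpc_cmd again being the test wrapper around scripts/rpc.py):

  # register three referrals on the discovery subsystem
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
  # the same three addresses should come back over RPC ...
  rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  # ... and in the discovery log the initiator fetches from the 8009 listener
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  # removal is symmetric
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

Later in the trace the same add RPC is also issued with -n to attach a subsystem NQN to the referral (-n discovery and -n nqn.2016-06.io.spdk:cnode1), which is what the subtype/subnqn checks at referrals.sh@67, @68, @75 and @76 below verify.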
00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:10.758 13:38:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.346 13:38:43 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:17.346 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.346 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:17.347 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:17.347 13:38:43 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:17.347 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:17.347 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.347 13:38:43 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:17.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:08:17.347 00:08:17.347 --- 10.0.0.2 ping statistics --- 00:08:17.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.347 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:17.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:08:17.347 00:08:17.347 --- 10.0.0.1 ping statistics --- 00:08:17.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.347 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=916761 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 916761 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 916761 ']' 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:17.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.347 13:38:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.347 [2024-07-15 13:38:43.771733] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:17.347 [2024-07-15 13:38:43.771802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.347 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.347 [2024-07-15 13:38:43.844890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.607 [2024-07-15 13:38:43.920211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.607 [2024-07-15 13:38:43.920251] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.607 [2024-07-15 13:38:43.920259] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.607 [2024-07-15 13:38:43.920265] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.607 [2024-07-15 13:38:43.920271] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.607 [2024-07-15 13:38:43.920416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.607 [2024-07-15 13:38:43.920532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.607 [2024-07-15 13:38:43.920694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.607 [2024-07-15 13:38:43.920695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.177 [2024-07-15 13:38:44.593801] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.177 [2024-07-15 13:38:44.610003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:18.177 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:18.438 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:18.438 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:18.438 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:18.438 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.438 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.438 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:18.438 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.438 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:18.438 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:18.438 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:18.438 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:18.438 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:18.439 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.439 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:18.439 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:18.699 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:18.699 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:18.699 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:18.699 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.699 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.699 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.699 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:18.699 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.699 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.700 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.700 13:38:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:18.700 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.700 13:38:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:18.700 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:18.960 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.960 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:18.960 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:18.960 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:18.960 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:18.960 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:18.960 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:18.960 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:18.960 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.960 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:18.960 13:38:45 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:18.960 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:18.960 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:18.960 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:18.960 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.960 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:19.221 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:19.482 13:38:45 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:19.482 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:19.482 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:19.482 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:19.482 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:19.482 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:19.482 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:19.482 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:19.482 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:19.482 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:19.482 13:38:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:19.743 
13:38:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:19.743 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:19.743 rmmod nvme_tcp 00:08:19.743 rmmod nvme_fabrics 00:08:20.055 rmmod nvme_keyring 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 916761 ']' 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 916761 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 916761 ']' 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 916761 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 916761 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 916761' 00:08:20.055 killing process with pid 916761 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 916761 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 916761 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.055 13:38:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.049 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:22.049 00:08:22.049 real 0m11.787s 00:08:22.049 user 0m13.122s 00:08:22.049 sys 0m5.708s 00:08:22.049 13:38:48 nvmf_tcp.nvmf_referrals 
-- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.310 13:38:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.310 ************************************ 00:08:22.310 END TEST nvmf_referrals 00:08:22.310 ************************************ 00:08:22.310 13:38:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:22.310 13:38:48 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:22.310 13:38:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:22.310 13:38:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.310 13:38:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:22.310 ************************************ 00:08:22.310 START TEST nvmf_connect_disconnect 00:08:22.310 ************************************ 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:22.310 * Looking for test storage... 00:08:22.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.310 13:38:48 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:22.310 13:38:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.453 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.453 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:30.453 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:30.453 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:30.453 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:30.453 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:30.454 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:30.454 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:30.454 13:38:55 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:30.454 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:30.454 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:30.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:08:30.454 00:08:30.454 --- 10.0.0.2 ping statistics --- 00:08:30.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.454 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:30.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:08:30.454 00:08:30.454 --- 10.0.0.1 ping statistics --- 00:08:30.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.454 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=921533 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 921533 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 921533 ']' 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.454 13:38:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.454 [2024-07-15 13:38:55.937813] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
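The nvmf_tcp_init trace above splits the two E810 ports across network namespaces: the target-side port (cvl_0_0) is moved into a fresh cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the default namespace as 10.0.0.1, an iptables rule opens TCP/4420, and a ping in each direction confirms the path. A condensed sketch of that sequence, using the rig-specific interface names and addresses from this run (they differ on other machines):

# network bring-up as traced by nvmf_tcp_init (names/addresses taken from this log)
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1       # start from clean interfaces
ip netns add cvl_0_0_ns_spdk                             # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port moves into it
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                       # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator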
00:08:30.454 [2024-07-15 13:38:55.937876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.454 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.455 [2024-07-15 13:38:56.007748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.455 [2024-07-15 13:38:56.082563] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.455 [2024-07-15 13:38:56.082601] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.455 [2024-07-15 13:38:56.082612] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.455 [2024-07-15 13:38:56.082618] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.455 [2024-07-15 13:38:56.082624] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.455 [2024-07-15 13:38:56.082774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.455 [2024-07-15 13:38:56.082884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.455 [2024-07-15 13:38:56.083049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.455 [2024-07-15 13:38:56.083050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.455 [2024-07-15 13:38:56.759790] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:30.455 13:38:56 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.455 [2024-07-15 13:38:56.819156] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:30.455 13:38:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:34.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:48.750 rmmod nvme_tcp 00:08:48.750 rmmod nvme_fabrics 00:08:48.750 rmmod nvme_keyring 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 921533 ']' 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 921533 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 921533 ']' 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 921533 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 921533 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 921533' 00:08:48.750 killing process with pid 921533 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 921533 00:08:48.750 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 921533 00:08:49.011 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:49.011 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:49.011 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:49.011 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:49.011 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:49.011 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.011 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.011 13:39:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.925 13:39:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:50.925 00:08:50.925 real 0m28.748s 00:08:50.925 user 1m18.722s 00:08:50.925 sys 0m6.458s 00:08:50.925 13:39:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.925 13:39:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:50.925 ************************************ 00:08:50.925 END TEST nvmf_connect_disconnect 00:08:50.925 ************************************ 00:08:50.925 13:39:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:50.925 13:39:17 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:50.925 13:39:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:50.925 13:39:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.925 13:39:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:51.185 ************************************ 00:08:51.185 START TEST nvmf_multitarget 00:08:51.185 ************************************ 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:51.185 * Looking for test storage... 
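The connect_disconnect run above configures the target entirely over JSON-RPC (the rpc_cmd calls traced at connect_disconnect.sh@18-24). The same configuration can be expressed with scripts/rpc.py; the script path and the -s /var/tmp/spdk.sock socket below are illustrative assumptions (the harness talks to the nvmf_tgt it started inside the namespace), while every parameter is copied from the trace:

# equivalent of the traced rpc_cmd sequence, issued via scripts/rpc.py from the spdk checkout
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0            # TCP transport, flags as traced
$RPC bdev_malloc_create 64 512                               # 64 MiB ramdisk, 512 B blocks; named Malloc0 in this run
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The test then runs num_iterations=5 connect/disconnect cycles against that subsystem (the five "disconnected 1 controller(s)" lines above) before nvmftestfini unloads nvme-tcp/nvme-fabrics and kills the target process.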
00:08:51.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.185 13:39:17 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:51.186 13:39:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:59.376 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.376 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:59.376 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:59.376 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:59.376 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:59.376 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:59.376 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:59.376 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:59.376 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:59.376 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:59.376 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:59.376 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:59.376 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:59.376 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:59.376 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:59.376 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:59.377 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:59.377 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:59.377 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:59.377 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:59.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:59.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:08:59.377 00:08:59.377 --- 10.0.0.2 ping statistics --- 00:08:59.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.377 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.430 ms 00:08:59.377 00:08:59.377 --- 10.0.0.1 ping statistics --- 00:08:59.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.377 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=930204 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 930204 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 930204 ']' 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:59.377 13:39:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:59.377 [2024-07-15 13:39:24.823939] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
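Before any multitarget RPCs are issued, nvmfappstart launches the target application inside the target namespace and blocks until its RPC socket answers (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above). A minimal sketch of that step; the nvmf_tgt invocation is taken verbatim from the trace, while the polling loop is only a stand-in for the waitforlisten helper:

# start nvmf_tgt inside the target namespace so its listeners bind to cvl_0_0
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                   # 930204 in this run
# stand-in for waitforlisten: poll the RPC socket until the app responds
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

Here -i 0 sets the shared-memory id (hence --file-prefix=spdk0 in the EAL parameters), -e 0xFFFF the tracepoint group mask, and -m 0xF the core mask behind the four reactor threads reported in the log.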
00:08:59.377 [2024-07-15 13:39:24.824010] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.377 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.377 [2024-07-15 13:39:24.894041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.377 [2024-07-15 13:39:24.971341] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.377 [2024-07-15 13:39:24.971380] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.377 [2024-07-15 13:39:24.971388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.377 [2024-07-15 13:39:24.971395] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.377 [2024-07-15 13:39:24.971400] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.377 [2024-07-15 13:39:24.971542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.378 [2024-07-15 13:39:24.971667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.378 [2024-07-15 13:39:24.971830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.378 [2024-07-15 13:39:24.971832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.378 13:39:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:59.378 13:39:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:59.378 13:39:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:59.378 13:39:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:59.378 13:39:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:59.378 13:39:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.378 13:39:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:59.378 13:39:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:59.378 13:39:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:59.378 13:39:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:59.378 13:39:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:59.378 "nvmf_tgt_1" 00:08:59.378 13:39:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:59.637 "nvmf_tgt_2" 00:08:59.638 13:39:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:59.638 13:39:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:59.638 13:39:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:59.638 13:39:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:59.638 true 00:08:59.638 13:39:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:59.898 true 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:59.898 rmmod nvme_tcp 00:08:59.898 rmmod nvme_fabrics 00:08:59.898 rmmod nvme_keyring 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 930204 ']' 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 930204 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 930204 ']' 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 930204 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:59.898 13:39:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 930204 00:09:00.158 13:39:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:00.158 13:39:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:00.158 13:39:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 930204' 00:09:00.158 killing process with pid 930204 00:09:00.158 13:39:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 930204 00:09:00.158 13:39:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 930204 00:09:00.158 13:39:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:00.158 13:39:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:00.158 13:39:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:00.158 13:39:26 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:00.158 13:39:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:00.158 13:39:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.158 13:39:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.158 13:39:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.701 13:39:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:02.701 00:09:02.701 real 0m11.187s 00:09:02.701 user 0m9.366s 00:09:02.701 sys 0m5.642s 00:09:02.701 13:39:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.702 13:39:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:02.702 ************************************ 00:09:02.702 END TEST nvmf_multitarget 00:09:02.702 ************************************ 00:09:02.702 13:39:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:02.702 13:39:28 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:02.702 13:39:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:02.702 13:39:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.702 13:39:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:02.702 ************************************ 00:09:02.702 START TEST nvmf_rpc 00:09:02.702 ************************************ 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:02.702 * Looking for test storage... 
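Before the rpc test gets going, it is worth spelling out what the nvmf_multitarget run that just finished actually exercised: per-process target objects driven entirely through the multitarget RPC plugin. Condensed from the trace above (the [ ... ] checks stand in for the test's '!=' comparisons):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    # Exactly one (default) target exists after startup.
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]

    # Create two extra targets with the same -s 32 sizing used above, then re-count.
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]

    # Delete them again; only the default target should remain.
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]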
00:09:02.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:02.702 13:39:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
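The nvmftestinit/prepare_net_devs sequence that the trace below walks through is mostly namespace plumbing for the two e810 ports. Condensed, and using the device names and addresses from this rig, it amounts to:

    # Map the e810 PCI functions to their kernel netdevs (the trace resolves these
    # to cvl_0_0 and cvl_0_1 via /sys).
    ls /sys/bus/pci/devices/0000:4b:00.0/net/    # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:4b:00.1/net/    # -> cvl_0_1

    # Put the target-side port into its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Let NVMe/TCP traffic in and sanity-check the path before starting the target.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2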
00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:09.300 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:09.300 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.300 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:09.301 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:09.301 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:09.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:09:09.301 00:09:09.301 --- 10.0.0.2 ping statistics --- 00:09:09.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.301 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:09.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:09:09.301 00:09:09.301 --- 10.0.0.1 ping statistics --- 00:09:09.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.301 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=934626 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 934626 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 934626 ']' 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:09.301 13:39:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.301 [2024-07-15 13:39:35.727702] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:09.301 [2024-07-15 13:39:35.727755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.301 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.301 [2024-07-15 13:39:35.793198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.562 [2024-07-15 13:39:35.859282] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.562 [2024-07-15 13:39:35.859319] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:09.562 [2024-07-15 13:39:35.859326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.562 [2024-07-15 13:39:35.859333] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.562 [2024-07-15 13:39:35.859338] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.562 [2024-07-15 13:39:35.859476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.562 [2024-07-15 13:39:35.859609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.562 [2024-07-15 13:39:35.859769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.562 [2024-07-15 13:39:35.859769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:10.133 "tick_rate": 2400000000, 00:09:10.133 "poll_groups": [ 00:09:10.133 { 00:09:10.133 "name": "nvmf_tgt_poll_group_000", 00:09:10.133 "admin_qpairs": 0, 00:09:10.133 "io_qpairs": 0, 00:09:10.133 "current_admin_qpairs": 0, 00:09:10.133 "current_io_qpairs": 0, 00:09:10.133 "pending_bdev_io": 0, 00:09:10.133 "completed_nvme_io": 0, 00:09:10.133 "transports": [] 00:09:10.133 }, 00:09:10.133 { 00:09:10.133 "name": "nvmf_tgt_poll_group_001", 00:09:10.133 "admin_qpairs": 0, 00:09:10.133 "io_qpairs": 0, 00:09:10.133 "current_admin_qpairs": 0, 00:09:10.133 "current_io_qpairs": 0, 00:09:10.133 "pending_bdev_io": 0, 00:09:10.133 "completed_nvme_io": 0, 00:09:10.133 "transports": [] 00:09:10.133 }, 00:09:10.133 { 00:09:10.133 "name": "nvmf_tgt_poll_group_002", 00:09:10.133 "admin_qpairs": 0, 00:09:10.133 "io_qpairs": 0, 00:09:10.133 "current_admin_qpairs": 0, 00:09:10.133 "current_io_qpairs": 0, 00:09:10.133 "pending_bdev_io": 0, 00:09:10.133 "completed_nvme_io": 0, 00:09:10.133 "transports": [] 00:09:10.133 }, 00:09:10.133 { 00:09:10.133 "name": "nvmf_tgt_poll_group_003", 00:09:10.133 "admin_qpairs": 0, 00:09:10.133 "io_qpairs": 0, 00:09:10.133 "current_admin_qpairs": 0, 00:09:10.133 "current_io_qpairs": 0, 00:09:10.133 "pending_bdev_io": 0, 00:09:10.133 "completed_nvme_io": 0, 00:09:10.133 "transports": [] 00:09:10.133 } 00:09:10.133 ] 00:09:10.133 }' 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.133 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.133 [2024-07-15 13:39:36.657171] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:10.408 "tick_rate": 2400000000, 00:09:10.408 "poll_groups": [ 00:09:10.408 { 00:09:10.408 "name": "nvmf_tgt_poll_group_000", 00:09:10.408 "admin_qpairs": 0, 00:09:10.408 "io_qpairs": 0, 00:09:10.408 "current_admin_qpairs": 0, 00:09:10.408 "current_io_qpairs": 0, 00:09:10.408 "pending_bdev_io": 0, 00:09:10.408 "completed_nvme_io": 0, 00:09:10.408 "transports": [ 00:09:10.408 { 00:09:10.408 "trtype": "TCP" 00:09:10.408 } 00:09:10.408 ] 00:09:10.408 }, 00:09:10.408 { 00:09:10.408 "name": "nvmf_tgt_poll_group_001", 00:09:10.408 "admin_qpairs": 0, 00:09:10.408 "io_qpairs": 0, 00:09:10.408 "current_admin_qpairs": 0, 00:09:10.408 "current_io_qpairs": 0, 00:09:10.408 "pending_bdev_io": 0, 00:09:10.408 "completed_nvme_io": 0, 00:09:10.408 "transports": [ 00:09:10.408 { 00:09:10.408 "trtype": "TCP" 00:09:10.408 } 00:09:10.408 ] 00:09:10.408 }, 00:09:10.408 { 00:09:10.408 "name": "nvmf_tgt_poll_group_002", 00:09:10.408 "admin_qpairs": 0, 00:09:10.408 "io_qpairs": 0, 00:09:10.408 "current_admin_qpairs": 0, 00:09:10.408 "current_io_qpairs": 0, 00:09:10.408 "pending_bdev_io": 0, 00:09:10.408 "completed_nvme_io": 0, 00:09:10.408 "transports": [ 00:09:10.408 { 00:09:10.408 "trtype": "TCP" 00:09:10.408 } 00:09:10.408 ] 00:09:10.408 }, 00:09:10.408 { 00:09:10.408 "name": "nvmf_tgt_poll_group_003", 00:09:10.408 "admin_qpairs": 0, 00:09:10.408 "io_qpairs": 0, 00:09:10.408 "current_admin_qpairs": 0, 00:09:10.408 "current_io_qpairs": 0, 00:09:10.408 "pending_bdev_io": 0, 00:09:10.408 "completed_nvme_io": 0, 00:09:10.408 "transports": [ 00:09:10.408 { 00:09:10.408 "trtype": "TCP" 00:09:10.408 } 00:09:10.408 ] 00:09:10.408 } 00:09:10.408 ] 00:09:10.408 }' 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
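The jcount/jsum helpers being traced here just reduce the nvmf_get_stats JSON to integers; written out, the checks around transport creation come down to the following (rpc_cmd is the suite's wrapper around the target's JSON-RPC socket):

    stats=$(rpc_cmd nvmf_get_stats)

    # jcount: one poll group per core of the 0xF mask.
    [ "$(jq '.poll_groups[].name' <<< "$stats" | wc -l)" -eq 4 ]

    # jsum: no admin or I/O qpairs have been created yet, so both sums are 0.
    [ "$(jq '.poll_groups[].admin_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}')" -eq 0 ]
    [ "$(jq '.poll_groups[].io_qpairs'    <<< "$stats" | awk '{s+=$1} END {print s}')" -eq 0 ]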
00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.408 Malloc1 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.408 [2024-07-15 13:39:36.848857] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:10.408 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:10.408 [2024-07-15 13:39:36.875659] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:10.408 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:10.408 could not add new controller: failed to write to nvme-fabrics device 00:09:10.409 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:10.409 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:10.409 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:10.409 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:10.409 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:10.409 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.409 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.409 13:39:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.409 13:39:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:12.321 13:39:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:12.321 13:39:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:12.321 13:39:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:12.321 13:39:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:12.321 13:39:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:14.233 13:39:40 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.233 [2024-07-15 13:39:40.599578] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:14.233 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:14.233 could not add new controller: failed to write to nvme-fabrics device 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.233 13:39:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:16.178 13:39:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:16.178 13:39:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:16.178 13:39:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.178 13:39:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:16.178 13:39:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:18.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:18.088 13:39:44 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.088 [2024-07-15 13:39:44.347308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.088 13:39:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:19.471 13:39:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:19.471 13:39:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:19.471 13:39:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.471 13:39:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:19.471 13:39:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:21.407 13:39:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:21.407 13:39:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:21.407 13:39:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:21.407 13:39:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:21.407 13:39:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:21.407 13:39:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:21.407 13:39:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:21.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.668 13:39:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:21.668 13:39:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:21.668 13:39:47 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:21.668 13:39:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.668 13:39:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:21.668 13:39:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.668 [2024-07-15 13:39:48.054831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.668 13:39:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:23.047 13:39:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:23.047 13:39:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:23.047 13:39:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.047 13:39:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:23.047 13:39:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.582 [2024-07-15 13:39:51.815661] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.582 13:39:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:26.964 13:39:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:26.964 13:39:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:26.964 13:39:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:26.964 13:39:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:26.964 13:39:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.501 [2024-07-15 13:39:55.571531] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.501 13:39:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:30.877 13:39:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:30.878 13:39:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:30.878 13:39:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.878 13:39:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:30.878 13:39:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.788 
13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:32.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.788 [2024-07-15 13:39:59.273872] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.788 13:39:59 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.788 13:39:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:34.699 13:40:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:34.699 13:40:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:34.699 13:40:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.699 13:40:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:34.699 13:40:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:36.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 [2024-07-15 13:40:02.989743] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 13:40:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 [2024-07-15 13:40:03.049878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.609 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.610 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.610 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.610 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.610 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.610 [2024-07-15 13:40:03.118083] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.610 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.610 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.610 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.610 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
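[annotation] The waitforserial / waitforserial_disconnect helpers traced earlier in this test reduce to polling lsblk for the subsystem serial string until the expected number of block devices appears (or disappears). A minimal standalone sketch of that polling logic, assuming the caller passes the serial string and that a retry bound of 15 with a 2-second sleep (as seen in the trace) is acceptable:

# Poll until a block device exposing the given NVMe serial shows up.
# Usage: wait_for_serial SPDKISFASTANDAWESOME [expected_count]
wait_for_serial() {
    local serial=$1 expected=${2:-1} i=0 found=0
    while (( i++ <= 15 )); do
        sleep 2
        # Count block devices whose SERIAL column matches the target serial.
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( found == expected )) && return 0
    done
    echo "timed out waiting for serial $serial" >&2
    return 1
}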
00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.872 [2024-07-15 13:40:03.178275] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.872 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
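[annotation] Each iteration of the loop traced above exercises the same subsystem lifecycle over JSON-RPC: create, add a TCP listener, attach a namespace, open it to any host, then tear it back down. A condensed sketch of one cycle driven directly through scripts/rpc.py instead of the test's rpc_cmd wrapper (this assumes an nvmf_tgt application is already running, listening on its default RPC socket, and that a Malloc1 bdev already exists):

NQN=nqn.2016-06.io.spdk:cnode1
RPC=scripts/rpc.py   # path assumed relative to an SPDK checkout

# Create the subsystem with the serial number the test later greps for.
$RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
# Expose it on the TCP transport and attach the namespace.
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1
$RPC nvmf_subsystem_allow_any_host "$NQN"
# Tear the subsystem back down again.
$RPC nvmf_subsystem_remove_ns "$NQN" 1
$RPC nvmf_delete_subsystem "$NQN"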
00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.873 [2024-07-15 13:40:03.238453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:36.873 "tick_rate": 2400000000, 00:09:36.873 "poll_groups": [ 00:09:36.873 { 00:09:36.873 "name": "nvmf_tgt_poll_group_000", 00:09:36.873 "admin_qpairs": 0, 00:09:36.873 "io_qpairs": 224, 00:09:36.873 "current_admin_qpairs": 0, 00:09:36.873 "current_io_qpairs": 0, 00:09:36.873 "pending_bdev_io": 0, 00:09:36.873 "completed_nvme_io": 274, 00:09:36.873 "transports": [ 00:09:36.873 { 00:09:36.873 "trtype": "TCP" 00:09:36.873 } 00:09:36.873 ] 00:09:36.873 }, 00:09:36.873 { 00:09:36.873 "name": "nvmf_tgt_poll_group_001", 00:09:36.873 "admin_qpairs": 1, 00:09:36.873 "io_qpairs": 223, 00:09:36.873 "current_admin_qpairs": 0, 00:09:36.873 "current_io_qpairs": 0, 00:09:36.873 "pending_bdev_io": 0, 00:09:36.873 "completed_nvme_io": 517, 00:09:36.873 "transports": [ 00:09:36.873 { 00:09:36.873 "trtype": "TCP" 00:09:36.873 } 00:09:36.873 ] 00:09:36.873 }, 00:09:36.873 { 
00:09:36.873 "name": "nvmf_tgt_poll_group_002", 00:09:36.873 "admin_qpairs": 6, 00:09:36.873 "io_qpairs": 218, 00:09:36.873 "current_admin_qpairs": 0, 00:09:36.873 "current_io_qpairs": 0, 00:09:36.873 "pending_bdev_io": 0, 00:09:36.873 "completed_nvme_io": 220, 00:09:36.873 "transports": [ 00:09:36.873 { 00:09:36.873 "trtype": "TCP" 00:09:36.873 } 00:09:36.873 ] 00:09:36.873 }, 00:09:36.873 { 00:09:36.873 "name": "nvmf_tgt_poll_group_003", 00:09:36.873 "admin_qpairs": 0, 00:09:36.873 "io_qpairs": 224, 00:09:36.873 "current_admin_qpairs": 0, 00:09:36.873 "current_io_qpairs": 0, 00:09:36.873 "pending_bdev_io": 0, 00:09:36.873 "completed_nvme_io": 228, 00:09:36.873 "transports": [ 00:09:36.873 { 00:09:36.873 "trtype": "TCP" 00:09:36.873 } 00:09:36.873 ] 00:09:36.873 } 00:09:36.873 ] 00:09:36.873 }' 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:36.873 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:37.135 rmmod nvme_tcp 00:09:37.135 rmmod nvme_fabrics 00:09:37.135 rmmod nvme_keyring 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 934626 ']' 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 934626 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 934626 ']' 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 934626 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 934626 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 934626' 00:09:37.135 killing process with pid 934626 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 934626 00:09:37.135 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 934626 00:09:37.396 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:37.396 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:37.396 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:37.396 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:37.396 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:37.396 13:40:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.396 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.396 13:40:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.310 13:40:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:39.310 00:09:39.310 real 0m36.983s 00:09:39.310 user 1m52.965s 00:09:39.310 sys 0m6.920s 00:09:39.310 13:40:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:39.310 13:40:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.310 ************************************ 00:09:39.310 END TEST nvmf_rpc 00:09:39.310 ************************************ 00:09:39.310 13:40:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:39.310 13:40:05 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:39.310 13:40:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:39.310 13:40:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.310 13:40:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:39.310 ************************************ 00:09:39.310 START TEST nvmf_invalid 00:09:39.310 ************************************ 00:09:39.310 13:40:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:39.571 * Looking for test storage... 
00:09:39.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:39.571 13:40:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.572 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:39.572 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:39.572 13:40:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:39.572 13:40:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:47.712 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.712 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:47.713 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:47.713 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:47.713 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:47.713 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.713 13:40:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:47.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:47.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:09:47.713 00:09:47.713 --- 10.0.0.2 ping statistics --- 00:09:47.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.713 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:47.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:09:47.713 00:09:47.713 --- 10.0.0.1 ping statistics --- 00:09:47.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.713 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=944486 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 944486 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 944486 ']' 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.713 13:40:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:47.713 [2024-07-15 13:40:13.147528] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:09:47.713 [2024-07-15 13:40:13.147585] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.713 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.713 [2024-07-15 13:40:13.214297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.713 [2024-07-15 13:40:13.283614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.713 [2024-07-15 13:40:13.283650] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.713 [2024-07-15 13:40:13.283658] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.714 [2024-07-15 13:40:13.283665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.714 [2024-07-15 13:40:13.283670] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.714 [2024-07-15 13:40:13.283810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.714 [2024-07-15 13:40:13.283923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.714 [2024-07-15 13:40:13.284081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.714 [2024-07-15 13:40:13.284081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.714 13:40:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.714 13:40:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:47.714 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:47.714 13:40:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:47.714 13:40:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:47.714 13:40:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.714 13:40:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:47.714 13:40:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28917 00:09:47.714 [2024-07-15 13:40:14.091117] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:47.714 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:47.714 { 00:09:47.714 "nqn": "nqn.2016-06.io.spdk:cnode28917", 00:09:47.714 "tgt_name": "foobar", 00:09:47.714 "method": "nvmf_create_subsystem", 00:09:47.714 "req_id": 1 00:09:47.714 } 00:09:47.714 Got JSON-RPC error response 00:09:47.714 response: 00:09:47.714 { 00:09:47.714 "code": -32603, 00:09:47.714 "message": "Unable to find target foobar" 00:09:47.714 }' 00:09:47.714 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:47.714 { 00:09:47.714 "nqn": "nqn.2016-06.io.spdk:cnode28917", 00:09:47.714 "tgt_name": "foobar", 00:09:47.714 "method": "nvmf_create_subsystem", 00:09:47.714 "req_id": 1 00:09:47.714 } 00:09:47.714 Got JSON-RPC error response 00:09:47.714 response: 00:09:47.714 { 00:09:47.714 "code": -32603, 00:09:47.714 "message": "Unable to find target foobar" 
00:09:47.714 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:47.714 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:47.714 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11667 00:09:47.974 [2024-07-15 13:40:14.267731] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11667: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:47.974 { 00:09:47.974 "nqn": "nqn.2016-06.io.spdk:cnode11667", 00:09:47.974 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:47.974 "method": "nvmf_create_subsystem", 00:09:47.974 "req_id": 1 00:09:47.974 } 00:09:47.974 Got JSON-RPC error response 00:09:47.974 response: 00:09:47.974 { 00:09:47.974 "code": -32602, 00:09:47.974 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:47.974 }' 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:47.974 { 00:09:47.974 "nqn": "nqn.2016-06.io.spdk:cnode11667", 00:09:47.974 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:47.974 "method": "nvmf_create_subsystem", 00:09:47.974 "req_id": 1 00:09:47.974 } 00:09:47.974 Got JSON-RPC error response 00:09:47.974 response: 00:09:47.974 { 00:09:47.974 "code": -32602, 00:09:47.974 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:47.974 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2975 00:09:47.974 [2024-07-15 13:40:14.444333] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2975: invalid model number 'SPDK_Controller' 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:47.974 { 00:09:47.974 "nqn": "nqn.2016-06.io.spdk:cnode2975", 00:09:47.974 "model_number": "SPDK_Controller\u001f", 00:09:47.974 "method": "nvmf_create_subsystem", 00:09:47.974 "req_id": 1 00:09:47.974 } 00:09:47.974 Got JSON-RPC error response 00:09:47.974 response: 00:09:47.974 { 00:09:47.974 "code": -32602, 00:09:47.974 "message": "Invalid MN SPDK_Controller\u001f" 00:09:47.974 }' 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:47.974 { 00:09:47.974 "nqn": "nqn.2016-06.io.spdk:cnode2975", 00:09:47.974 "model_number": "SPDK_Controller\u001f", 00:09:47.974 "method": "nvmf_create_subsystem", 00:09:47.974 "req_id": 1 00:09:47.974 } 00:09:47.974 Got JSON-RPC error response 00:09:47.974 response: 00:09:47.974 { 00:09:47.974 "code": -32602, 00:09:47.974 "message": "Invalid MN SPDK_Controller\u001f" 00:09:47.974 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:47.974 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 
13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 
13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ I == \- ]] 00:09:48.235 13:40:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'IC5w&kuI,DaYpSb#dP2S$*Bg' 00:09:48.810 13:40:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'a;39ahjXQ8_EWbM|eyBx?{{Ut=Kv/?D/>P2S$*Bg' nqn.2016-06.io.spdk:cnode14728 00:09:48.810 [2024-07-15 13:40:15.262876] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14728: invalid model number 'a;39ahjXQ8_EWbM|eyBx?{{Ut=Kv/?D/>P2S$*Bg' 00:09:48.810 13:40:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:48.810 { 00:09:48.810 "nqn": "nqn.2016-06.io.spdk:cnode14728", 00:09:48.810 "model_number": "a;39ahjXQ8_EWbM|eyBx?{{Ut=Kv/\u007f?D/>P2S$*Bg", 00:09:48.810 "method": "nvmf_create_subsystem", 00:09:48.810 "req_id": 1 00:09:48.810 } 00:09:48.810 Got 
JSON-RPC error response 00:09:48.810 response: 00:09:48.810 { 00:09:48.810 "code": -32602, 00:09:48.810 "message": "Invalid MN a;39ahjXQ8_EWbM|eyBx?{{Ut=Kv/\u007f?D/>P2S$*Bg" 00:09:48.810 }' 00:09:48.810 13:40:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:48.810 { 00:09:48.810 "nqn": "nqn.2016-06.io.spdk:cnode14728", 00:09:48.810 "model_number": "a;39ahjXQ8_EWbM|eyBx?{{Ut=Kv/\u007f?D/>P2S$*Bg", 00:09:48.810 "method": "nvmf_create_subsystem", 00:09:48.810 "req_id": 1 00:09:48.810 } 00:09:48.810 Got JSON-RPC error response 00:09:48.810 response: 00:09:48.810 { 00:09:48.810 "code": -32602, 00:09:48.810 "message": "Invalid MN a;39ahjXQ8_EWbM|eyBx?{{Ut=Kv/\u007f?D/>P2S$*Bg" 00:09:48.810 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:48.810 13:40:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:49.070 [2024-07-15 13:40:15.435509] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.070 13:40:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:49.363 13:40:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:49.363 13:40:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:49.363 13:40:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:49.363 13:40:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:49.363 13:40:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:49.363 [2024-07-15 13:40:15.786076] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:49.363 13:40:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:49.363 { 00:09:49.363 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:49.363 "listen_address": { 00:09:49.363 "trtype": "tcp", 00:09:49.363 "traddr": "", 00:09:49.363 "trsvcid": "4421" 00:09:49.363 }, 00:09:49.363 "method": "nvmf_subsystem_remove_listener", 00:09:49.363 "req_id": 1 00:09:49.363 } 00:09:49.363 Got JSON-RPC error response 00:09:49.363 response: 00:09:49.363 { 00:09:49.363 "code": -32602, 00:09:49.363 "message": "Invalid parameters" 00:09:49.363 }' 00:09:49.363 13:40:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:49.363 { 00:09:49.363 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:49.363 "listen_address": { 00:09:49.363 "trtype": "tcp", 00:09:49.363 "traddr": "", 00:09:49.363 "trsvcid": "4421" 00:09:49.363 }, 00:09:49.363 "method": "nvmf_subsystem_remove_listener", 00:09:49.363 "req_id": 1 00:09:49.363 } 00:09:49.363 Got JSON-RPC error response 00:09:49.363 response: 00:09:49.363 { 00:09:49.363 "code": -32602, 00:09:49.363 "message": "Invalid parameters" 00:09:49.363 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:49.363 13:40:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2765 -i 0 00:09:49.623 [2024-07-15 13:40:15.958556] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2765: invalid cntlid range [0-65519] 00:09:49.623 13:40:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:49.623 { 00:09:49.623 "nqn": 
"nqn.2016-06.io.spdk:cnode2765", 00:09:49.623 "min_cntlid": 0, 00:09:49.623 "method": "nvmf_create_subsystem", 00:09:49.623 "req_id": 1 00:09:49.623 } 00:09:49.623 Got JSON-RPC error response 00:09:49.623 response: 00:09:49.623 { 00:09:49.623 "code": -32602, 00:09:49.623 "message": "Invalid cntlid range [0-65519]" 00:09:49.623 }' 00:09:49.623 13:40:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:49.623 { 00:09:49.623 "nqn": "nqn.2016-06.io.spdk:cnode2765", 00:09:49.623 "min_cntlid": 0, 00:09:49.623 "method": "nvmf_create_subsystem", 00:09:49.623 "req_id": 1 00:09:49.623 } 00:09:49.623 Got JSON-RPC error response 00:09:49.623 response: 00:09:49.623 { 00:09:49.623 "code": -32602, 00:09:49.623 "message": "Invalid cntlid range [0-65519]" 00:09:49.623 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:49.623 13:40:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19456 -i 65520 00:09:49.623 [2024-07-15 13:40:16.131093] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19456: invalid cntlid range [65520-65519] 00:09:49.883 13:40:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:49.883 { 00:09:49.883 "nqn": "nqn.2016-06.io.spdk:cnode19456", 00:09:49.883 "min_cntlid": 65520, 00:09:49.883 "method": "nvmf_create_subsystem", 00:09:49.883 "req_id": 1 00:09:49.883 } 00:09:49.883 Got JSON-RPC error response 00:09:49.883 response: 00:09:49.883 { 00:09:49.883 "code": -32602, 00:09:49.883 "message": "Invalid cntlid range [65520-65519]" 00:09:49.883 }' 00:09:49.883 13:40:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:49.883 { 00:09:49.883 "nqn": "nqn.2016-06.io.spdk:cnode19456", 00:09:49.883 "min_cntlid": 65520, 00:09:49.883 "method": "nvmf_create_subsystem", 00:09:49.883 "req_id": 1 00:09:49.883 } 00:09:49.883 Got JSON-RPC error response 00:09:49.883 response: 00:09:49.883 { 00:09:49.883 "code": -32602, 00:09:49.883 "message": "Invalid cntlid range [65520-65519]" 00:09:49.883 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:49.883 13:40:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28216 -I 0 00:09:49.883 [2024-07-15 13:40:16.303668] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28216: invalid cntlid range [1-0] 00:09:49.883 13:40:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:49.883 { 00:09:49.883 "nqn": "nqn.2016-06.io.spdk:cnode28216", 00:09:49.883 "max_cntlid": 0, 00:09:49.883 "method": "nvmf_create_subsystem", 00:09:49.883 "req_id": 1 00:09:49.883 } 00:09:49.883 Got JSON-RPC error response 00:09:49.883 response: 00:09:49.883 { 00:09:49.883 "code": -32602, 00:09:49.883 "message": "Invalid cntlid range [1-0]" 00:09:49.883 }' 00:09:49.883 13:40:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:49.883 { 00:09:49.883 "nqn": "nqn.2016-06.io.spdk:cnode28216", 00:09:49.883 "max_cntlid": 0, 00:09:49.883 "method": "nvmf_create_subsystem", 00:09:49.883 "req_id": 1 00:09:49.883 } 00:09:49.883 Got JSON-RPC error response 00:09:49.883 response: 00:09:49.883 { 00:09:49.883 "code": -32602, 00:09:49.883 "message": "Invalid cntlid range [1-0]" 00:09:49.883 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:49.883 13:40:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16287 -I 65520 00:09:50.143 [2024-07-15 13:40:16.468132] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16287: invalid cntlid range [1-65520] 00:09:50.143 13:40:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:50.143 { 00:09:50.143 "nqn": "nqn.2016-06.io.spdk:cnode16287", 00:09:50.143 "max_cntlid": 65520, 00:09:50.143 "method": "nvmf_create_subsystem", 00:09:50.143 "req_id": 1 00:09:50.143 } 00:09:50.143 Got JSON-RPC error response 00:09:50.143 response: 00:09:50.143 { 00:09:50.143 "code": -32602, 00:09:50.143 "message": "Invalid cntlid range [1-65520]" 00:09:50.143 }' 00:09:50.143 13:40:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:50.143 { 00:09:50.143 "nqn": "nqn.2016-06.io.spdk:cnode16287", 00:09:50.143 "max_cntlid": 65520, 00:09:50.143 "method": "nvmf_create_subsystem", 00:09:50.143 "req_id": 1 00:09:50.143 } 00:09:50.143 Got JSON-RPC error response 00:09:50.143 response: 00:09:50.143 { 00:09:50.143 "code": -32602, 00:09:50.143 "message": "Invalid cntlid range [1-65520]" 00:09:50.143 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:50.143 13:40:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25429 -i 6 -I 5 00:09:50.143 [2024-07-15 13:40:16.640680] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25429: invalid cntlid range [6-5] 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:50.403 { 00:09:50.403 "nqn": "nqn.2016-06.io.spdk:cnode25429", 00:09:50.403 "min_cntlid": 6, 00:09:50.403 "max_cntlid": 5, 00:09:50.403 "method": "nvmf_create_subsystem", 00:09:50.403 "req_id": 1 00:09:50.403 } 00:09:50.403 Got JSON-RPC error response 00:09:50.403 response: 00:09:50.403 { 00:09:50.403 "code": -32602, 00:09:50.403 "message": "Invalid cntlid range [6-5]" 00:09:50.403 }' 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:50.403 { 00:09:50.403 "nqn": "nqn.2016-06.io.spdk:cnode25429", 00:09:50.403 "min_cntlid": 6, 00:09:50.403 "max_cntlid": 5, 00:09:50.403 "method": "nvmf_create_subsystem", 00:09:50.403 "req_id": 1 00:09:50.403 } 00:09:50.403 Got JSON-RPC error response 00:09:50.403 response: 00:09:50.403 { 00:09:50.403 "code": -32602, 00:09:50.403 "message": "Invalid cntlid range [6-5]" 00:09:50.403 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:50.403 { 00:09:50.403 "name": "foobar", 00:09:50.403 "method": "nvmf_delete_target", 00:09:50.403 "req_id": 1 00:09:50.403 } 00:09:50.403 Got JSON-RPC error response 00:09:50.403 response: 00:09:50.403 { 00:09:50.403 "code": -32602, 00:09:50.403 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:09:50.403 }' 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:50.403 { 00:09:50.403 "name": "foobar", 00:09:50.403 "method": "nvmf_delete_target", 00:09:50.403 "req_id": 1 00:09:50.403 } 00:09:50.403 Got JSON-RPC error response 00:09:50.403 response: 00:09:50.403 { 00:09:50.403 "code": -32602, 00:09:50.403 "message": "The specified target doesn't exist, cannot delete it." 00:09:50.403 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:50.403 rmmod nvme_tcp 00:09:50.403 rmmod nvme_fabrics 00:09:50.403 rmmod nvme_keyring 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 944486 ']' 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 944486 00:09:50.403 13:40:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 944486 ']' 00:09:50.404 13:40:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 944486 00:09:50.404 13:40:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:50.404 13:40:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:50.404 13:40:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944486 00:09:50.404 13:40:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:50.404 13:40:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:50.404 13:40:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 944486' 00:09:50.404 killing process with pid 944486 00:09:50.404 13:40:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 944486 00:09:50.404 13:40:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 944486 00:09:50.663 13:40:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:50.663 13:40:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:50.663 13:40:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:50.663 13:40:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.663 13:40:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:50.663 13:40:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.664 13:40:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.664 
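That closes out target/invalid.sh. Every check in it followed the same negative-test shape visible in the trace: call rpc.py with one deliberately bad argument, capture the JSON-RPC error text it prints, and glob-match the message (Unable to find target, Invalid SN, Invalid MN, Invalid cntlid range, and so on). A minimal standalone version of that pattern, using the cntlid case as the example, might look like the sketch below; it mirrors the out=/[[ ... == *...* ]] idiom from the log rather than reproducing the script verbatim.

  #!/usr/bin/env bash
  # Sketch of the negative-test idiom used throughout target/invalid.sh.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # min_cntlid=0 is out of range, so the RPC is expected to fail with a JSON-RPC error.
  if out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2765 -i 0 2>&1); then
      echo "expected nvmf_create_subsystem to be rejected" >&2
      exit 1
  fi

  # Only the class of error matters, so match on the message substring.
  [[ $out == *"Invalid cntlid range"* ]] || { echo "unexpected error: $out" >&2; exit 1; }
  echo "got the expected 'Invalid cntlid range' response"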
13:40:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.578 13:40:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:52.578 00:09:52.578 real 0m13.271s 00:09:52.578 user 0m19.114s 00:09:52.578 sys 0m6.164s 00:09:52.578 13:40:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.578 13:40:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:52.578 ************************************ 00:09:52.578 END TEST nvmf_invalid 00:09:52.578 ************************************ 00:09:52.839 13:40:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:52.839 13:40:19 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:52.839 13:40:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:52.839 13:40:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.839 13:40:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:52.839 ************************************ 00:09:52.839 START TEST nvmf_abort 00:09:52.839 ************************************ 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:52.839 * Looking for test storage... 00:09:52.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.839 13:40:19 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.840 
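For orientation, the nvmf/common.sh sourcing above mostly pins down connection defaults: ports 4420/4421/4422, the SPDKISFASTANDAWESOME serial, and a per-run host NQN from nvme gen-hostnqn. The sketch below restates those defaults and shows one way an initiator-side step could consume them; the nvme connect line is purely illustrative (abort.sh drives I/O through the SPDK abort example, not the kernel initiator), and deriving NVME_HOSTID from the NQN's uuid suffix is an assumption that merely matches the values visible in the trace.

  #!/usr/bin/env bash
  # Sketch: the connection defaults from nvmf/common.sh and an illustrative consumer.
  NVMF_PORT=4420
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # fresh nqn.2014-08.org.nvmexpress:uuid:... per run
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed derivation: the uuid suffix of the host NQN

  echo "subsystem serial default: $NVMF_SERIAL"
  # Print (rather than run) the connect command these values would feed.
  echo nvme connect -t tcp -a 10.0.0.2 -s "$NVMF_PORT" -n nqn.2016-06.io.spdk:cnode0 \
      "--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID"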
13:40:19 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:52.840 13:40:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:00.990 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:00.990 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:00.990 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:00.990 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:00.990 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:00.991 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:10:00.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:10:00.991 00:10:00.991 --- 10.0.0.2 ping statistics --- 00:10:00.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.991 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:10:00.991 00:10:00.991 --- 10.0.0.1 ping statistics --- 00:10:00.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.991 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=949526 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 949526 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 949526 ']' 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.991 13:40:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.991 [2024-07-15 13:40:26.574880] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
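Same connectivity check and app start as in the nvmf_invalid run, but nvmfappstart now passes -m 0xE instead of -m 0xF, which is why the target below reports three reactors (cores 1-3) rather than four. The snippet is a small illustration of how a hex core mask maps to core numbers; it is not part of the test scripts.

  #!/usr/bin/env bash
  # Decode an SPDK -m core mask into the cores it selects.
  mask=0xE        # 0b1110: cores 1, 2, 3 (the nvmf_invalid run used 0xF, cores 0-3)
  cores=()
  for ((bit = 0; bit < 64; bit++)); do
      (( (mask >> bit) & 1 )) && cores+=("$bit")
  done
  echo "core mask $mask selects cores: ${cores[*]}"    # -> 1 2 3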
00:10:00.991 [2024-07-15 13:40:26.574951] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.991 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.991 [2024-07-15 13:40:26.662946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.991 [2024-07-15 13:40:26.756645] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.991 [2024-07-15 13:40:26.756702] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.991 [2024-07-15 13:40:26.756710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.991 [2024-07-15 13:40:26.756718] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.991 [2024-07-15 13:40:26.756724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.991 [2024-07-15 13:40:26.756890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.991 [2024-07-15 13:40:26.757057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.991 [2024-07-15 13:40:26.757058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.991 [2024-07-15 13:40:27.402082] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.991 Malloc0 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.991 Delay0 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
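The rpc_cmd calls above assemble the target the abort test runs against: a TCP transport, a 64 MiB Malloc0 RAM disk, and a Delay0 bdev layered on top of it so that I/O stays outstanding long enough to be aborted. Re-issuing that sequence by hand would look roughly like the sketch below; rpc_cmd is assumed to resolve to rpc.py against the in-namespace target, and the comments paraphrase the option values seen in the trace rather than documenting every flag.

  #!/usr/bin/env bash
  # Sketch: the target-side RPC sequence for abort.sh, replayed by hand.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  "$rpc" nvmf_create_transport -t tcp -o -u 8192 -a 256      # TCP transport with the options from the trace
  "$rpc" bdev_malloc_create 64 4096 -b Malloc0               # 64 MiB RAM disk, 4096-byte blocks
  "$rpc" bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000            # large artificial latencies keep I/O in flight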
00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.991 [2024-07-15 13:40:27.483505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.991 13:40:27 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:01.252 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.252 [2024-07-15 13:40:27.561924] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:03.191 Initializing NVMe Controllers 00:10:03.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:03.191 controller IO queue size 128 less than required 00:10:03.191 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:03.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:03.192 Initialization complete. Launching workers. 
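With the subsystem, namespace, and listener in place, the harness launches the abort example against the target. The deliberately high queue depth is what triggers the "IO queue size 128 less than required" notice: requests back up behind the Delay0 latency and the tool then issues abort commands for them. A hand-run equivalent of that invocation is sketched below; the flag readings in the comment (-q queue depth, -t seconds, -c core mask) follow the perf-style conventions of the example and should be checked against its --help output rather than taken as definitive.

  #!/usr/bin/env bash
  # Sketch: re-running the abort workload by hand against the target built above.
  ABORT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort

  # -q 128: requested queue depth, -t 1: run for one second, -c 0x1: single worker on core 0.
  "$ABORT" -q 128 -t 1 -c 0x1 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'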
00:10:03.192 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32606 00:10:03.192 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32667, failed to submit 62 00:10:03.192 success 32610, unsuccess 57, failed 0 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:03.192 rmmod nvme_tcp 00:10:03.192 rmmod nvme_fabrics 00:10:03.192 rmmod nvme_keyring 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 949526 ']' 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 949526 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 949526 ']' 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 949526 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:03.192 13:40:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:03.452 13:40:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 949526 00:10:03.452 13:40:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:03.452 13:40:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:03.452 13:40:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 949526' 00:10:03.452 killing process with pid 949526 00:10:03.452 13:40:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 949526 00:10:03.452 13:40:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 949526 00:10:03.452 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:03.452 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:03.452 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:03.452 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:03.452 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:03.452 13:40:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.452 13:40:29 nvmf_tcp.nvmf_abort -- 
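Reading the summary above: roughly 32.6k I/Os were queued in one second, almost all of them completed with an abort status ("failed: 32606"), and 32610 of the 32667 submitted abort commands succeeded, which is what the test exercises. The teardown that follows is the standard nvmftestfini path, whose killprocess step double-checks that the pid still names an SPDK reactor before killing it. The sketch below is a simplified paraphrase of that guard, not the exact helper from autotest_common.sh (which, as the trace shows, also special-cases a sudo wrapper).

  #!/usr/bin/env bash
  # Sketch: a killprocess-style guard before tearing down the target.
  killprocess() {
      local pid=$1 name
      kill -0 "$pid" 2>/dev/null || return 0                  # already gone, nothing to do
      name=$(ps --no-headers -o comm= "$pid")
      if [[ $name != reactor_* ]]; then
          echo "pid $pid is '$name', refusing to kill it" >&2
          return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                         # reap it if it is our child
  }

  killprocess "${nvmfpid:-949526}"   # pid from the trace; nvmfpid when run inside the harness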
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:03.452 13:40:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.996 13:40:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:05.996 00:10:05.996 real 0m12.822s 00:10:05.996 user 0m13.250s 00:10:05.996 sys 0m6.180s 00:10:05.996 13:40:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.996 13:40:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:05.996 ************************************ 00:10:05.996 END TEST nvmf_abort 00:10:05.996 ************************************ 00:10:05.996 13:40:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:05.996 13:40:32 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:05.996 13:40:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:05.996 13:40:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.996 13:40:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:05.996 ************************************ 00:10:05.996 START TEST nvmf_ns_hotplug_stress 00:10:05.996 ************************************ 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:05.996 * Looking for test storage... 00:10:05.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.996 13:40:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:05.996 13:40:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:05.996 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:05.997 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.997 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:05.997 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:05.997 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.997 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:05.997 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:05.997 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:05.997 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.997 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:05.997 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.997 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:05.997 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:05.997 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:05.997 13:40:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.581 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:12.582 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:12.582 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.582 13:40:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:12.582 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:12.582 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.582 13:40:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.582 13:40:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.582 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.582 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.582 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:12.582 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:12.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:10:12.843 00:10:12.843 --- 10.0.0.2 ping statistics --- 00:10:12.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.843 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
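The ip commands in the trace above are the network plumbing every tcp/phy test in this run relies on: the first e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target side at 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, an iptables ACCEPT rule is inserted for TCP port 4420 on cvl_0_1, and both directions are ping-checked (the reply to the second ping follows below). Condensed from the trace, with the jenkins paths dropped:
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port gets its own network namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in on the initiator port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1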
00:10:12.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:10:12.843 00:10:12.843 --- 10.0.0.1 ping statistics --- 00:10:12.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.843 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=954345 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 954345 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 954345 ']' 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.843 13:40:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.843 [2024-07-15 13:40:39.302192] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
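Because the target port now lives inside cvl_0_0_ns_spdk, nvmfappstart wraps the application in ip netns exec. The -m 0xE core mask hands the target cores 1-3 (matching the three reactors reported in the startup notices that follow), -e 0xFFFF enables every tracepoint group, and -i 0 selects shared-memory id 0. With the jenkins paths stripped, the launch amounts to this sketch:
    # the harness then blocks until the RPC socket /var/tmp/spdk.sock answers before issuing any rpc.py calls
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &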
00:10:12.843 [2024-07-15 13:40:39.302267] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.843 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.104 [2024-07-15 13:40:39.387282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:13.104 [2024-07-15 13:40:39.464349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.104 [2024-07-15 13:40:39.464397] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.104 [2024-07-15 13:40:39.464405] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.104 [2024-07-15 13:40:39.464412] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.104 [2024-07-15 13:40:39.464418] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.104 [2024-07-15 13:40:39.464542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.104 [2024-07-15 13:40:39.464701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.104 [2024-07-15 13:40:39.464702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.675 13:40:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.675 13:40:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:13.675 13:40:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.675 13:40:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:13.675 13:40:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.675 13:40:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.675 13:40:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:13.675 13:40:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:13.937 [2024-07-15 13:40:40.248798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.937 13:40:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:13.937 13:40:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.199 [2024-07-15 13:40:40.582322] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.199 13:40:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:14.460 13:40:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:14.460 Malloc0 00:10:14.460 13:40:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:14.721 Delay0 00:10:14.721 13:40:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.982 13:40:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:14.982 NULL1 00:10:14.982 13:40:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:15.243 13:40:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:15.243 13:40:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=954727 00:10:15.243 13:40:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:15.243 13:40:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.243 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.504 13:40:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.504 13:40:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:15.504 13:40:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:15.765 true 00:10:15.765 13:40:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:15.765 13:40:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.027 13:40:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.027 13:40:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:16.027 13:40:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:16.316 true 00:10:16.316 13:40:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:16.316 13:40:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.591 13:40:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.591 13:40:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:16.591 13:40:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:16.864 true 00:10:16.864 13:40:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:16.864 13:40:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.864 13:40:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.129 13:40:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:17.129 13:40:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:17.129 true 00:10:17.391 13:40:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:17.391 13:40:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.391 13:40:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.656 13:40:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:17.656 13:40:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:17.656 true 00:10:17.919 13:40:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:17.919 13:40:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.919 13:40:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.180 13:40:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:18.180 13:40:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:18.180 true 00:10:18.180 13:40:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:18.180 13:40:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.441 13:40:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.702 13:40:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
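The pattern repeating above and below is the whole point of this test: while spdk_nvme_perf (PERF_PID 954727, a 30-second 512-byte randread run at queue depth 128 against 10.0.0.2:4420) stays alive, the script keeps hot-removing and re-adding namespace 1 of cnode1 (the Delay0 delay bdev layered on Malloc0) and nudges the size of the NULL1 null bdev up by one each pass (1001, 1002, ...), so namespace hotplug and live resize are exercised under I/O. Reconstructed from the xtrace rather than copied from ns_hotplug_stress.sh, so treat the exact shape as a sketch:
    null_size=1000
    while kill -0 "$PERF_PID"; do                                   # keep stressing until the perf run exits
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # hot-remove nsid 1 (Delay0)
        scripts/rpc.py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0   # and add it straight back
        null_size=$((null_size + 1))
        scripts/rpc.py bdev_null_resize NULL1 "$null_size"          # resize the other namespace's backing bdev
    done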
00:10:18.702 13:40:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:18.702 true 00:10:18.702 13:40:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:18.702 13:40:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.963 13:40:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.224 13:40:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:19.224 13:40:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:19.224 true 00:10:19.224 13:40:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:19.224 13:40:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.485 13:40:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.485 13:40:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:19.485 13:40:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:19.745 true 00:10:19.745 13:40:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:19.745 13:40:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.006 13:40:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.006 13:40:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:20.006 13:40:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:20.266 true 00:10:20.267 13:40:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:20.267 13:40:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.267 13:40:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.527 13:40:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:20.527 13:40:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1011 00:10:20.788 true 00:10:20.788 13:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:20.788 13:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.788 13:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.049 13:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:21.049 13:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:21.049 true 00:10:21.309 13:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:21.309 13:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.309 13:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.570 13:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:21.570 13:40:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:21.570 true 00:10:21.570 13:40:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:21.570 13:40:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.838 13:40:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.099 13:40:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:22.099 13:40:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:22.099 true 00:10:22.099 13:40:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:22.099 13:40:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.359 13:40:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.359 13:40:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:22.359 13:40:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:22.619 true 00:10:22.619 13:40:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:22.619 
13:40:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.880 13:40:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.880 13:40:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:22.880 13:40:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:23.141 true 00:10:23.141 13:40:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:23.141 13:40:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.401 13:40:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.401 13:40:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:23.401 13:40:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:23.662 true 00:10:23.662 13:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:23.662 13:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.923 13:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.923 13:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:23.923 13:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:24.184 true 00:10:24.184 13:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:24.184 13:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.184 13:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.444 13:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:24.444 13:40:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:24.704 true 00:10:24.704 13:40:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:24.704 13:40:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:24.704 13:40:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.964 13:40:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:24.964 13:40:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:25.223 true 00:10:25.223 13:40:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:25.223 13:40:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.223 13:40:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.483 13:40:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:25.483 13:40:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:25.483 true 00:10:25.744 13:40:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:25.744 13:40:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.744 13:40:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.006 13:40:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:26.006 13:40:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:26.006 true 00:10:26.006 13:40:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:26.006 13:40:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.266 13:40:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.527 13:40:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:26.527 13:40:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:26.527 true 00:10:26.527 13:40:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:26.527 13:40:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.787 13:40:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.046 13:40:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:27.046 13:40:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:27.046 true 00:10:27.046 13:40:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:27.046 13:40:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.306 13:40:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.306 13:40:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:27.306 13:40:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:27.586 true 00:10:27.586 13:40:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:27.586 13:40:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.847 13:40:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.847 13:40:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:27.847 13:40:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:28.108 true 00:10:28.108 13:40:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:28.108 13:40:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.369 13:40:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.369 13:40:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:28.369 13:40:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:28.630 true 00:10:28.630 13:40:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:28.630 13:40:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.630 13:40:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.891 13:40:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:28.891 13:40:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:29.152 true 00:10:29.152 13:40:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:29.152 13:40:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.152 13:40:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.413 13:40:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:29.413 13:40:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:29.674 true 00:10:29.674 13:40:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:29.674 13:40:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.674 13:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.935 13:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:29.935 13:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:29.935 true 00:10:30.196 13:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:30.196 13:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.196 13:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.457 13:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:30.457 13:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:30.457 true 00:10:30.718 13:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:30.718 13:40:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.718 13:40:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.979 13:40:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:30.979 13:40:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:30.979 true 00:10:30.979 13:40:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:30.979 13:40:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.239 13:40:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.499 13:40:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:31.499 13:40:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:31.499 true 00:10:31.499 13:40:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:31.500 13:40:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.760 13:40:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.760 13:40:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:31.760 13:40:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:32.021 true 00:10:32.021 13:40:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:32.022 13:40:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.283 13:40:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.283 13:40:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:32.283 13:40:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:32.578 true 00:10:32.578 13:40:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:32.578 13:40:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.578 13:40:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.839 13:40:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:32.839 13:40:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:33.101 true 00:10:33.101 13:40:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:33.101 13:40:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.101 13:40:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.362 13:40:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:33.362 13:40:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:33.362 true 00:10:33.362 13:40:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:33.362 13:40:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.622 13:41:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.882 13:41:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:33.882 13:41:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:33.882 true 00:10:33.882 13:41:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:33.882 13:41:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.143 13:41:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.143 13:41:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:34.143 13:41:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:34.403 true 00:10:34.403 13:41:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:34.403 13:41:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.663 13:41:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.663 13:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:34.663 13:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:34.923 true 00:10:34.923 13:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:34.923 13:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.182 13:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.182 13:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:35.182 13:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:35.442 true 00:10:35.442 13:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:35.442 13:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.702 13:41:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.702 13:41:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:35.702 13:41:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:35.961 true 00:10:35.961 13:41:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:35.961 13:41:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.221 13:41:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.221 13:41:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:36.221 13:41:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:36.481 true 00:10:36.481 13:41:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:36.481 13:41:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.481 13:41:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.741 13:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:36.741 13:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:37.001 true 00:10:37.001 13:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:37.001 13:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.001 
13:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.260 13:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:37.260 13:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:37.260 true 00:10:37.521 13:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:37.521 13:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.521 13:41:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.782 13:41:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:37.782 13:41:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:37.782 true 00:10:38.043 13:41:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:38.043 13:41:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.043 13:41:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.303 13:41:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:38.303 13:41:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:38.303 true 00:10:38.303 13:41:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:38.303 13:41:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.564 13:41:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.824 13:41:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:38.824 13:41:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:38.824 true 00:10:38.824 13:41:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:38.824 13:41:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.086 13:41:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.347 13:41:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:39.347 13:41:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:39.347 true 00:10:39.347 13:41:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:39.347 13:41:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.609 13:41:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.609 13:41:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:39.609 13:41:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:39.870 true 00:10:39.870 13:41:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:39.870 13:41:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.131 13:41:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.131 13:41:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:40.131 13:41:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:40.392 true 00:10:40.392 13:41:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:40.392 13:41:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.653 13:41:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.653 13:41:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:40.653 13:41:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:40.914 true 00:10:40.914 13:41:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:40.914 13:41:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.914 13:41:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.175 13:41:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:10:41.175 13:41:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:41.436 true 00:10:41.436 13:41:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:41.436 13:41:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.436 13:41:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.697 13:41:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:10:41.697 13:41:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:10:41.958 true 00:10:41.958 13:41:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:41.958 13:41:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.958 13:41:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.219 13:41:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:10:42.219 13:41:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:10:42.480 true 00:10:42.480 13:41:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:42.480 13:41:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.480 13:41:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.741 13:41:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:10:42.741 13:41:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:10:42.741 true 00:10:43.003 13:41:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:43.003 13:41:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.003 13:41:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.263 13:41:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:10:43.263 13:41:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 
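The repeated @44-@50 records above come from the namespace hot-plug loop in ns_hotplug_stress.sh: while the background perf workload (PID 954727 in this run) is still alive, the test removes namespace 1, re-adds the Delay0 bdev as a namespace, bumps null_size, and resizes the NULL1 bdev. A minimal bash sketch of that loop, reconstructed only from the trace markers; the script body itself is not part of this log, so $rpc, $perf_pid, and the starting null_size are illustrative stand-ins:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=954727      # PID of the background perf process in this run
    null_size=1027       # illustrative starting point; the trace above continues at 1028
    while kill -0 "$perf_pid" 2>/dev/null; do
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # drop NSID 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # re-add Delay0 as a namespace
        (( ++null_size ))                                                 # 1028, 1029, ... as traced at @49
        "$rpc" bdev_null_resize NULL1 "$null_size"                        # grow the null bdev behind NSID 2
    done
    wait "$perf_pid"

The loop exits once the perf process finishes (the "kill: (954727) - No such process" record further below), after which the test removes both namespaces and moves on to the parallel add/remove phase.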
00:10:43.263 true 00:10:43.263 13:41:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:43.263 13:41:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.524 13:41:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.783 13:41:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:10:43.783 13:41:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:10:43.783 true 00:10:43.783 13:41:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:43.783 13:41:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.043 13:41:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.302 13:41:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:10:44.302 13:41:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:10:44.302 true 00:10:44.302 13:41:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:44.302 13:41:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.564 13:41:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.825 13:41:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:10:44.825 13:41:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:10:44.825 true 00:10:44.825 13:41:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:44.825 13:41:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.085 13:41:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.362 13:41:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1061 00:10:45.362 13:41:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:10:45.362 true 00:10:45.362 13:41:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727 00:10:45.362 13:41:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:45.362 Initializing NVMe Controllers
00:10:45.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:45.362 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:10:45.362 Controller IO queue size 128, less than required.
00:10:45.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:45.362 WARNING: Some requested NVMe devices were skipped
00:10:45.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:45.362 Initialization complete. Launching workers.
00:10:45.362 ========================================================
00:10:45.362 Latency(us)
00:10:45.362 Device Information : IOPS MiB/s Average min max
00:10:45.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30655.79 14.97 4175.31 2332.58 9919.07
00:10:45.362 ========================================================
00:10:45.362 Total : 30655.79 14.97 4175.31 2332.58 9919.07
00:10:45.362
00:10:45.674 13:41:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:45.674 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1062
00:10:45.936 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062
00:10:45.936 true
00:10:45.936 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 954727
00:10:45.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (954727) - No such process
00:10:45.936 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 954727
00:10:45.936 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:45.936 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:46.197 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:46.197 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:46.197 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:46.197 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:46.197 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:46.458 null0
00:10:46.458 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:46.458 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:46.458 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:46.458 null1 00:10:46.458 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:46.458 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:46.458 13:41:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:46.718 null2 00:10:46.718 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:46.718 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:46.718 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:46.978 null3 00:10:46.978 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:46.978 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:46.978 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:46.978 null4 00:10:46.978 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:46.978 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:46.978 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:47.238 null5 00:10:47.238 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:47.238 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:47.238 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:47.238 null6 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:47.498 null7 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
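The @58-@60 records above and below set up the second phase: eight null bdevs (null0 through null7), each 100 MB with a 4096-byte block size, one per worker. A short sketch of that setup loop, assuming $rpc points at spdk/scripts/rpc.py as in the trace:

    nthreads=8
    for (( i = 0; i < nthreads; i++ )); do
        # bdev_null_create <name> <size_mb> <block_size>; prints the bdev name (null0 ... null7) on success
        "$rpc" bdev_null_create "null$i" 100 4096
    done

Each of these bdevs is then attached to and detached from nqn.2016-06.io.spdk:cnode1 in parallel by the add_remove workers launched in the records that follow.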
00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:47.498 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
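The interleaved @62-@64 and @14-@18 records here are eight add_remove workers being launched in the background, one per null bdev, each cycling its namespace ID on nqn.2016-06.io.spdk:cnode1 ten times. A bash sketch of what those markers correspond to, reconstructed from the trace rather than copied from the script, so treat the helper body as an approximation:

    add_remove() {
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    pids=()
    for (( i = 0; i < nthreads; i++ )); do
        add_remove "$((i + 1))" "null$i" &   # NSIDs 1..8 paired with bdevs null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                        # the "wait 961323 961325 ..." record below

This is also why the remove_ns/add_ns records below arrive out of NSID order: the eight workers run concurrently and their xtrace output is interleaved.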
00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 961323 961325 961329 961330 961334 961337 961340 961342 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.499 13:41:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:47.759 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:47.759 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:47.759 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:47.759 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.759 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:47.759 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:47.759 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:47.759 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:47.759 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.759 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.759 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:47.759 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.759 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.759 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:48.021 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:48.022 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:48.022 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:48.283 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:48.545 13:41:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.545 13:41:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:48.545 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.545 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.545 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:48.545 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.545 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.545 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:48.806 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:48.806 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.806 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:48.806 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:48.806 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:48.806 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.806 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:48.806 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:48.806 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.806 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.806 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:48.806 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.807 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.807 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:48.807 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.807 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.807 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:48.807 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.807 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.807 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:48.807 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.807 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.807 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:48.807 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.807 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.807 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.067 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:49.329 13:41:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.329 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:49.592 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:49.592 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.592 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.592 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:49.592 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.592 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.592 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:49.592 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.592 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.592 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:49.592 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.592 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.592 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:49.592 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.592 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.592 13:41:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:49.592 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.592 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.592 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:10:49.592 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.592 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:49.592 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:49.592 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.592 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.592 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:49.592 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:49.853 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:49.853 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:49.853 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:49.853 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:49.854 13:41:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.854 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:50.114 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:50.375 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:50.635 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.635 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.635 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:50.635 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:50.635 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.635 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:50.635 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.635 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.635 13:41:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:50.635 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.635 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.635 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:50.635 13:41:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.635 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.635 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:50.635 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.635 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.635 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:50.635 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:50.635 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.635 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.635 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:50.635 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.635 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.635 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:50.635 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:50.636 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.636 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.636 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:50.895 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:50.895 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:50.895 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:50.895 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.895 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.895 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:50.895 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.895 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.895 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.896 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:50.896 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.896 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.896 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.896 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.896 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.896 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.156 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.156 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.156 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.156 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.156 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.156 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:51.157 rmmod nvme_tcp 00:10:51.157 rmmod nvme_fabrics 00:10:51.157 rmmod nvme_keyring 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 954345 ']' 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 954345 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 954345 ']' 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 954345 00:10:51.157 13:41:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 954345 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 954345' 00:10:51.157 killing process with pid 954345 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 954345 00:10:51.157 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 954345 00:10:51.417 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:51.417 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:51.417 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:51.417 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:51.417 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:51.417 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.417 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:51.417 13:41:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.329 13:41:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:53.329 00:10:53.329 real 0m47.728s 00:10:53.329 user 3m15.292s 00:10:53.329 sys 0m16.196s 00:10:53.329 13:41:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:53.329 13:41:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.329 ************************************ 00:10:53.329 END TEST nvmf_ns_hotplug_stress 00:10:53.329 ************************************ 00:10:53.329 13:41:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:53.329 13:41:19 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:53.329 13:41:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:53.329 13:41:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.329 13:41:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:53.591 ************************************ 00:10:53.591 START TEST nvmf_connect_stress 00:10:53.591 ************************************ 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:53.591 * Looking for test storage... 
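Before following nvmf_connect_stress below, it helps to read the nvmf_ns_hotplug_stress trace above as the loop it came from: for ten iterations (the @16 counter), the script attaches the eight null bdevs null0..null7 to nqn.2016-06.io.spdk:cnode1 as namespace IDs 1..8 (@17) and then hot-removes them again (@18), which is why the log is a wall of interleaved add_ns/remove_ns calls; the whole test took about 48 seconds of wall-clock time (real 0m47.728s). A minimal sketch of that pattern, reconstructed from the trace, follows; the '&'/wait pairing is an assumption inferred from the shuffled nsid ordering in the log, not something the trace states outright.

    # Hotplug stress loop as reconstructed from the ns_hotplug_stress.sh trace (@16-@18).
    # The backgrounding is assumed from the out-of-order completions seen above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do
        for n in {1..8}; do
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))" &   # attach null(n-1) as nsid n
        done
        wait
        for n in {1..8}; do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &                      # hot-remove nsid n again
        done
        wait
    done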
00:10:53.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:53.591 13:41:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.591 13:41:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:53.591 13:41:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:53.591 13:41:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:53.591 13:41:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.206 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.206 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:00.206 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:00.206 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:00.206 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:00.206 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:00.206 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:00.206 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:00.206 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:00.206 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:00.206 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:00.206 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:00.206 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:00.206 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:00.206 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:00.206 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:00.207 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:00.207 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:00.207 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.207 13:41:26 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:00.207 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:00.207 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:00.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:00.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:11:00.469 00:11:00.469 --- 10.0.0.2 ping statistics --- 00:11:00.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.469 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.484 ms 00:11:00.469 00:11:00.469 --- 10.0.0.1 ping statistics --- 00:11:00.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.469 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=966422 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 966422 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 966422 ']' 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.469 13:41:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.470 13:41:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.470 [2024-07-15 13:41:26.902795] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
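The ping exchange above closes out the network prep: nvmf_tcp_init takes the two E810 ports detected earlier (cvl_0_0 and cvl_0_1), pushes the first into a private network namespace as the target at 10.0.0.2, leaves the second in the root namespace as the initiator at 10.0.0.1, opens TCP port 4420, and verifies the path with a single ping in each direction (roughly 0.5 ms both ways). Condensed straight from the traced commands:

    # Namespace plumbing as traced in nvmf/common.sh (@244-@268).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                               # target port into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                     # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0       # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT            # open TCP port 4420 (NVMe/TCP)
    ping -c 1 10.0.0.2                                                      # root ns -> target: 0.542 ms
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                        # target ns -> initiator: 0.484 ms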
00:11:00.470 [2024-07-15 13:41:26.902845] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.470 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.470 [2024-07-15 13:41:26.986312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:00.731 [2024-07-15 13:41:27.061095] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.731 [2024-07-15 13:41:27.061150] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.731 [2024-07-15 13:41:27.061158] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.731 [2024-07-15 13:41:27.061165] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.731 [2024-07-15 13:41:27.061171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.731 [2024-07-15 13:41:27.061286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.731 [2024-07-15 13:41:27.061563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.731 [2024-07-15 13:41:27.061563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.303 [2024-07-15 13:41:27.727271] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.303 [2024-07-15 13:41:27.768285] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.303 NULL1 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=966537 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.303 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.303 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress 
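With the null bdev NULL1 created and a listener at 10.0.0.2:4420, the setup traced in the last few entries reduces to one application start, four RPCs, and the stressor itself. The consolidated form below is a summary of the trace, not a quote from connect_stress.sh; rpc_cmd in the test framework resolves to the same rpc.py invocations shown here.

    # Target started inside the namespace on cores 1-3 (-m 0xE), as traced above.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192                          # TCP transport, 8192-byte in-capsule data
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512                                  # 1000 MB null bdev, 512-byte blocks

    # The stressor: one core (-c 0x1), ten seconds (-t 10) of connection stress against the listener.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10

While connect_stress runs as PID 966537, the @27/@28 loop above appears to queue twenty RPC requests into rpc.txt, and the @34/@35 pair keeps replaying that batch through rpc_cmd for as long as kill -0 reports the stressor alive, which is exactly the repeating "kill -0 966537" pattern in the entries that follow.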
-- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.564 13:41:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.825 13:41:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.825 13:41:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:01.825 13:41:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.825 13:41:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.825 13:41:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.085 13:41:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.085 13:41:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:02.085 13:41:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.085 13:41:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.085 13:41:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.345 13:41:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.345 13:41:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:02.345 
13:41:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.345 13:41:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.345 13:41:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.917 13:41:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.917 13:41:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:02.917 13:41:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.917 13:41:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.917 13:41:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.177 13:41:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.177 13:41:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:03.177 13:41:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.177 13:41:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.177 13:41:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.438 13:41:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.438 13:41:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:03.438 13:41:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.438 13:41:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.438 13:41:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.697 13:41:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.698 13:41:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:03.698 13:41:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.698 13:41:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.698 13:41:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.268 13:41:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.268 13:41:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:04.268 13:41:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.268 13:41:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.268 13:41:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.565 13:41:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.565 13:41:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:04.565 13:41:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.565 13:41:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.565 13:41:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.825 13:41:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.825 13:41:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:04.825 13:41:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:04.825 13:41:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.825 13:41:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.085 13:41:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.085 13:41:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:05.085 13:41:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.085 13:41:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.085 13:41:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.346 13:41:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.346 13:41:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:05.346 13:41:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.346 13:41:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.346 13:41:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.607 13:41:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.607 13:41:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:05.607 13:41:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.607 13:41:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.607 13:41:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.178 13:41:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.178 13:41:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:06.178 13:41:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.178 13:41:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.178 13:41:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.438 13:41:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.438 13:41:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:06.439 13:41:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.439 13:41:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.439 13:41:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.698 13:41:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.698 13:41:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:06.698 13:41:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.698 13:41:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.698 13:41:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.959 13:41:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.959 13:41:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:06.959 13:41:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.959 13:41:33 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.959 13:41:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.219 13:41:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.219 13:41:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:07.219 13:41:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.219 13:41:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.219 13:41:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.790 13:41:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.790 13:41:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:07.790 13:41:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.790 13:41:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.790 13:41:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.051 13:41:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.051 13:41:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:08.051 13:41:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.051 13:41:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.051 13:41:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.311 13:41:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.311 13:41:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:08.311 13:41:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.311 13:41:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.311 13:41:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.572 13:41:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.572 13:41:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:08.572 13:41:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.572 13:41:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.572 13:41:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 13:41:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.144 13:41:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:09.144 13:41:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.144 13:41:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.144 13:41:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.404 13:41:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.404 13:41:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:09.404 13:41:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.404 13:41:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.404 
13:41:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.665 13:41:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.665 13:41:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:09.665 13:41:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.665 13:41:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.665 13:41:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.928 13:41:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.928 13:41:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:09.928 13:41:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.928 13:41:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.928 13:41:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.189 13:41:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.189 13:41:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:10.189 13:41:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.189 13:41:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.189 13:41:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.760 13:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.760 13:41:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:10.760 13:41:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.760 13:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.760 13:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.020 13:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.020 13:41:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:11.020 13:41:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.020 13:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.020 13:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.280 13:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.280 13:41:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:11.280 13:41:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.280 13:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.280 13:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.540 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:11.540 13:41:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.540 13:41:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 966537 00:11:11.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (966537) - No such process 00:11:11.540 13:41:37 
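
The long run of kill -0 966537 / rpc_cmd pairs above is the stress loop itself: while the connect_stress process is still alive, the script keeps issuing RPCs at the target, and once kill -0 reports "No such process" it falls through to the wait on the next line. A simplified stand-in for that loop (the real script replays the prepared rpc.txt batch; rpc_get_methods is used here only as a cheap placeholder RPC):

  while kill -0 "$PERF_PID" 2>/dev/null; do          # stressor still running?
      scripts/rpc.py rpc_get_methods > /dev/null     # keep the target's RPC server busy meanwhile
  done
  wait "$PERF_PID"                                   # reap the stressor once the loop exits
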
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 966537 00:11:11.540 13:41:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:11.540 13:41:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:11.540 13:41:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:11.540 13:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:11.540 13:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:11.540 13:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:11.540 13:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:11.540 13:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:11.540 13:41:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:11.540 rmmod nvme_tcp 00:11:11.540 rmmod nvme_fabrics 00:11:11.540 rmmod nvme_keyring 00:11:11.540 13:41:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:11.540 13:41:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:11.540 13:41:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:11.540 13:41:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 966422 ']' 00:11:11.540 13:41:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 966422 00:11:11.540 13:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 966422 ']' 00:11:11.540 13:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 966422 00:11:11.540 13:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:11.801 13:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:11.801 13:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 966422 00:11:11.801 13:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:11.801 13:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:11.801 13:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 966422' 00:11:11.801 killing process with pid 966422 00:11:11.801 13:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 966422 00:11:11.801 13:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 966422 00:11:11.801 13:41:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:11.801 13:41:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:11.801 13:41:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:11.801 13:41:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:11.801 13:41:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:11.801 13:41:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.801 13:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:11.801 13:41:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # 
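
nvmftestfini then unwinds the test: the kernel initiator modules are removed (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe -r walking the dependency chain), the nvmf_tgt process started for this test (pid 966422, running as reactor_1) is killed, and the test namespace and addresses are cleaned up. A hand-written approximation of that teardown, with the namespace deletion being an assumption about what _remove_spdk_ns does:

  sync
  modprobe -v -r nvme-tcp              # also drops nvme_fabrics and nvme_keyring, per the rmmod output
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # stop the target app (966422 here)
  ip netns delete cvl_0_0_ns_spdk      # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1             # clear the initiator-side test address, as the trace shows next
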
_remove_spdk_ns 00:11:14.342 13:41:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:14.342 00:11:14.342 real 0m20.438s 00:11:14.342 user 0m42.027s 00:11:14.342 sys 0m8.275s 00:11:14.342 13:41:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:14.342 13:41:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.342 ************************************ 00:11:14.342 END TEST nvmf_connect_stress 00:11:14.342 ************************************ 00:11:14.342 13:41:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:14.342 13:41:40 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:14.342 13:41:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:14.342 13:41:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.342 13:41:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:14.342 ************************************ 00:11:14.342 START TEST nvmf_fused_ordering 00:11:14.342 ************************************ 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:14.342 * Looking for test storage... 00:11:14.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.342 13:41:40 
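
While sourcing nvmf/common.sh, the harness also mints the initiator identity stored in NVME_HOST: the host NQN comes straight from nvme-cli and the host ID is the UUID portion of that NQN. A two-line sketch of the same derivation (the parameter expansion is an assumed stand-in for what common.sh actually does):

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # the bare UUID, later passed as --hostid
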
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:14.342 13:41:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:20.926 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:20.926 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:20.926 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:20.926 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:20.926 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:21.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:11:21.187 00:11:21.187 --- 10.0.0.2 ping statistics --- 00:11:21.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.187 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.467 ms 00:11:21.187 00:11:21.187 --- 10.0.0.1 ping statistics --- 00:11:21.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.187 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=972794 00:11:21.187 13:41:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:21.188 13:41:47 nvmf_tcp.nvmf_fused_ordering -- 
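
nvmf_tcp_init, traced above, builds the physical-NIC topology for the TCP tests: of the two E810 ports discovered earlier (0x8086:0x159b), cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1 while cvl_0_0 is moved into a fresh network namespace and addressed as the target at 10.0.0.2, which keeps the two endpoints in separate network stacks even though they share a host; a one-packet ping in each direction then confirms the link before any NVMe traffic flows. Gathered out of the trace into one place, the sequence is:

  ip -4 addr flush cvl_0_0                          # start both test ports from a clean state
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                      # namespace that will own the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic through the host firewall
  ping -c 1 10.0.0.2                                # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns
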
nvmf/common.sh@482 -- # waitforlisten 972794 00:11:21.188 13:41:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 972794 ']' 00:11:21.188 13:41:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.188 13:41:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:21.188 13:41:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.188 13:41:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:21.188 13:41:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.188 [2024-07-15 13:41:47.614063] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:21.188 [2024-07-15 13:41:47.614129] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.188 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.188 [2024-07-15 13:41:47.698349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.449 [2024-07-15 13:41:47.790749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.449 [2024-07-15 13:41:47.790813] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.449 [2024-07-15 13:41:47.790821] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.449 [2024-07-15 13:41:47.790827] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.449 [2024-07-15 13:41:47.790833] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
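
The target application itself is launched inside that namespace on a single core (mask 0x2, i.e. core 1) with all tracepoint groups enabled (-e 0xFFFF), and nvmfappstart then blocks in waitforlisten until the app's RPC socket at /var/tmp/spdk.sock answers, which is what the "Waiting for process to start up and listen on UNIX domain socket..." message corresponds to. A minimal stand-in for that startup (the polling loop is an assumption; the real waitforlisten helper is more careful):

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!                                              # 972794 in this run
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2                                           # wait for the RPC server to come up
  done
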
00:11:21.449 [2024-07-15 13:41:47.790870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.021 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.021 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:22.021 13:41:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:22.021 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:22.021 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.021 13:41:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.021 13:41:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.021 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.021 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.021 [2024-07-15 13:41:48.447871] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.021 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.021 13:41:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:22.021 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.021 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.021 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.021 13:41:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.022 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.022 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.022 [2024-07-15 13:41:48.464113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.022 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.022 13:41:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:22.022 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.022 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.022 NULL1 00:11:22.022 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.022 13:41:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:22.022 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.022 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.022 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.022 13:41:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:22.022 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.022 13:41:48 
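
With the target running, the test provisions it entirely over RPC: a TCP transport with the traced -o -u 8192 options, subsystem nqn.2016-06.io.spdk:cnode1 that allows any host (-a), carries serial SPDK00000000000001 and caps namespaces at 10 (-m 10), a listener on 10.0.0.2:4420, and the NULL1 null bdev attached as namespace 1, which is the "1GB" namespace the fused_ordering tool reports attaching to below. Written as direct rpc.py calls instead of the harness's rpc_cmd wrapper (a sketch; both talk to the same RPC server):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512      # 1000 MiB, 512-byte blocks
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # then drive the fused-command ordering tool at the new namespace, as the next traced command shows:
  test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
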
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.022 13:41:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.022 13:41:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:22.022 [2024-07-15 13:41:48.521657] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:22.022 [2024-07-15 13:41:48.521710] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid972866 ] 00:11:22.283 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.853 Attached to nqn.2016-06.io.spdk:cnode1 00:11:22.853 Namespace ID: 1 size: 1GB 00:11:22.853 fused_ordering(0) 00:11:22.853 fused_ordering(1) 00:11:22.853 fused_ordering(2) 00:11:22.853 fused_ordering(3) 00:11:22.853 fused_ordering(4) 00:11:22.853 fused_ordering(5) 00:11:22.853 fused_ordering(6) 00:11:22.853 fused_ordering(7) 00:11:22.853 fused_ordering(8) 00:11:22.853 fused_ordering(9) 00:11:22.853 fused_ordering(10) 00:11:22.853 fused_ordering(11) 00:11:22.853 fused_ordering(12) 00:11:22.853 fused_ordering(13) 00:11:22.853 fused_ordering(14) 00:11:22.853 fused_ordering(15) 00:11:22.853 fused_ordering(16) 00:11:22.853 fused_ordering(17) 00:11:22.853 fused_ordering(18) 00:11:22.853 fused_ordering(19) 00:11:22.853 fused_ordering(20) 00:11:22.853 fused_ordering(21) 00:11:22.853 fused_ordering(22) 00:11:22.853 fused_ordering(23) 00:11:22.853 fused_ordering(24) 00:11:22.853 fused_ordering(25) 00:11:22.853 fused_ordering(26) 00:11:22.853 fused_ordering(27) 00:11:22.853 fused_ordering(28) 00:11:22.853 fused_ordering(29) 00:11:22.853 fused_ordering(30) 00:11:22.853 fused_ordering(31) 00:11:22.853 fused_ordering(32) 00:11:22.853 fused_ordering(33) 00:11:22.853 fused_ordering(34) 00:11:22.853 fused_ordering(35) 00:11:22.853 fused_ordering(36) 00:11:22.853 fused_ordering(37) 00:11:22.853 fused_ordering(38) 00:11:22.853 fused_ordering(39) 00:11:22.853 fused_ordering(40) 00:11:22.853 fused_ordering(41) 00:11:22.853 fused_ordering(42) 00:11:22.853 fused_ordering(43) 00:11:22.853 fused_ordering(44) 00:11:22.853 fused_ordering(45) 00:11:22.853 fused_ordering(46) 00:11:22.853 fused_ordering(47) 00:11:22.853 fused_ordering(48) 00:11:22.853 fused_ordering(49) 00:11:22.853 fused_ordering(50) 00:11:22.853 fused_ordering(51) 00:11:22.853 fused_ordering(52) 00:11:22.853 fused_ordering(53) 00:11:22.853 fused_ordering(54) 00:11:22.853 fused_ordering(55) 00:11:22.853 fused_ordering(56) 00:11:22.853 fused_ordering(57) 00:11:22.853 fused_ordering(58) 00:11:22.853 fused_ordering(59) 00:11:22.853 fused_ordering(60) 00:11:22.853 fused_ordering(61) 00:11:22.853 fused_ordering(62) 00:11:22.853 fused_ordering(63) 00:11:22.853 fused_ordering(64) 00:11:22.853 fused_ordering(65) 00:11:22.853 fused_ordering(66) 00:11:22.853 fused_ordering(67) 00:11:22.853 fused_ordering(68) 00:11:22.853 fused_ordering(69) 00:11:22.853 fused_ordering(70) 00:11:22.853 fused_ordering(71) 00:11:22.853 fused_ordering(72) 00:11:22.853 fused_ordering(73) 00:11:22.853 fused_ordering(74) 00:11:22.853 fused_ordering(75) 00:11:22.853 fused_ordering(76) 00:11:22.853 fused_ordering(77) 00:11:22.853 fused_ordering(78) 00:11:22.853 
fused_ordering(79) 00:11:22.853 [fused_ordering(80) through fused_ordering(1022) complete in order between 00:11:22.853 and 00:11:24.857; the per-ordering lines differ only in the index and are condensed here] fused_ordering(1023) 00:11:24.857 13:41:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:24.857 13:41:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:24.857 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:24.857 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:24.857 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:24.857 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:24.857 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:24.857 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r
nvme-tcp 00:11:24.857 rmmod nvme_tcp 00:11:24.857 rmmod nvme_fabrics 00:11:24.857 rmmod nvme_keyring 00:11:24.857 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:24.857 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:24.857 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:24.857 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 972794 ']' 00:11:24.857 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 972794 00:11:24.857 13:41:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 972794 ']' 00:11:24.857 13:41:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 972794 00:11:24.857 13:41:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:25.118 13:41:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:25.118 13:41:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 972794 00:11:25.118 13:41:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:25.118 13:41:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:25.118 13:41:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 972794' 00:11:25.118 killing process with pid 972794 00:11:25.118 13:41:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 972794 00:11:25.118 13:41:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 972794 00:11:25.118 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:25.118 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:25.118 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:25.118 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:25.118 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:25.118 13:41:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.118 13:41:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:25.118 13:41:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.660 13:41:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:27.660 00:11:27.660 real 0m13.280s 00:11:27.660 user 0m7.341s 00:11:27.660 sys 0m7.082s 00:11:27.660 13:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:27.660 13:41:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.660 ************************************ 00:11:27.660 END TEST nvmf_fused_ordering 00:11:27.660 ************************************ 00:11:27.660 13:41:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:27.660 13:41:53 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:27.660 13:41:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:27.660 13:41:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.660 
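For reference, the nvmftestfini/nvmfcleanup teardown traced at the end of the fused_ordering test above boils down to unloading the NVMe/TCP kernel modules, stopping the nvmf_tgt app (pid 972794 in this run), and removing the test network plumbing. A minimal shell sketch of that cleanup, with the assumption that remove_spdk_ns simply deletes the cvl_0_0_ns_spdk namespace (pid, namespace and interface names are specific to this run):

  # Cleanup sketch based on the trace above.
  modprobe -v -r nvme-tcp          # the trace shows this also dropping nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 972794                      # stop the nvmf_tgt started for the test; killprocess also waits for it to exit
  ip netns del cvl_0_0_ns_spdk     # assumed equivalent of remove_spdk_ns
  ip -4 addr flush cvl_0_1         # drop the initiator-side test address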
13:41:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:27.660 ************************************ 00:11:27.660 START TEST nvmf_delete_subsystem 00:11:27.660 ************************************ 00:11:27.660 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:27.660 * Looking for test storage... 00:11:27.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.660 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.660 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:27.660 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.660 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.660 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:27.661 13:41:53 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:27.661 13:41:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.287 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:34.288 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:34.288 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.288 
13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:34.288 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:34.288 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.288 13:42:00 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:34.288 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:34.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:11:34.549 00:11:34.549 --- 10.0.0.2 ping statistics --- 00:11:34.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.549 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:11:34.549 00:11:34.549 --- 10.0.0.1 ping statistics --- 00:11:34.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.549 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=977690 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 977690 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 977690 ']' 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.549 13:42:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.549 [2024-07-15 13:42:00.989021] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:34.549 [2024-07-15 13:42:00.989086] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.549 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.549 [2024-07-15 13:42:01.059844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:34.810 [2024-07-15 13:42:01.134790] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.810 [2024-07-15 13:42:01.134829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.810 [2024-07-15 13:42:01.134839] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.810 [2024-07-15 13:42:01.134845] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.810 [2024-07-15 13:42:01.134851] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
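At this point nvmftestinit has moved the target-side E810 port into a private network namespace, addressed both ends, verified reachability with the two pings above, and nvmfappstart has launched nvmf_tgt inside that namespace with core mask 0x3 (hence the two reactors reported below). Condensed, the plumbing and startup traced above amount to the following (interface names, addresses, paths and the core mask are the ones used in this run):

  # Replay of the nvmftestinit / nvmfappstart steps from the trace above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                           # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # namespace -> root ns
  # Start the target inside the namespace; waitforlisten then polls /var/tmp/spdk.sock.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &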
00:11:34.810 [2024-07-15 13:42:01.135427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.810 [2024-07-15 13:42:01.135504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.379 [2024-07-15 13:42:01.806773] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.379 [2024-07-15 13:42:01.822906] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.379 NULL1 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.379 Delay0 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.379 13:42:01 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=977934 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:35.379 13:42:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:35.379 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.639 [2024-07-15 13:42:01.907562] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:37.551 13:42:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.551 13:42:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.551 13:42:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 
00:11:37.551 starting I/O failed: -6 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 [2024-07-15 13:42:03.951255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f85c0 is same with the state(5) to be set 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error 
(sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 starting I/O failed: -6 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 [2024-07-15 13:42:03.955972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f58b400d430 is same with the state(5) to be set 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 
00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.551 Write completed with error (sct=0, sc=8) 00:11:37.551 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Write completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Write completed with error (sct=0, sc=8) 00:11:37.552 Write completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:37.552 Write completed with error (sct=0, sc=8) 00:11:37.552 Write completed with error (sct=0, sc=8) 00:11:37.552 Read completed with error (sct=0, sc=8) 00:11:38.493 [2024-07-15 13:42:04.924257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f9ac0 is same with the state(5) to be set 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Write completed with 
error (sct=0, sc=8) 00:11:38.493 [2024-07-15 13:42:04.954684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f83e0 is same with the state(5) to be set 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 [2024-07-15 13:42:04.955012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f87a0 is same with the state(5) to be set 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 [2024-07-15 13:42:04.957662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f58b400d740 is same with the state(5) to be set 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Write completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Write completed with 
error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 Read completed with error (sct=0, sc=8) 00:11:38.493 [2024-07-15 13:42:04.958326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f58b400cfe0 is same with the state(5) to be set 00:11:38.493 Initializing NVMe Controllers 00:11:38.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:38.493 Controller IO queue size 128, less than required. 00:11:38.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:38.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:38.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:38.493 Initialization complete. Launching workers. 00:11:38.493 ======================================================== 00:11:38.493 Latency(us) 00:11:38.493 Device Information : IOPS MiB/s Average min max 00:11:38.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.36 0.08 891854.08 217.04 1006321.22 00:11:38.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.42 0.08 949841.08 293.82 2002279.48 00:11:38.493 ======================================================== 00:11:38.493 Total : 324.78 0.16 919424.59 217.04 2002279.48 00:11:38.493 00:11:38.493 [2024-07-15 13:42:04.958835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f9ac0 (9): Bad file descriptor 00:11:38.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:38.493 13:42:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.493 13:42:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:38.493 13:42:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 977934 00:11:38.493 13:42:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 977934 00:11:39.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (977934) - No such process 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 977934 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 977934 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 977934 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 
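The run above is the core of the delete_subsystem scenario (delete_subsystem.sh lines 15 through 45 in the trace): build a deliberately slow namespace (a null bdev wrapped in a delay bdev), start spdk_nvme_perf against it, delete the subsystem while I/O is still queued, and confirm the perf process dies with I/O errors instead of hanging. A condensed bash sketch of that flow, using only the RPCs and perf options visible in the trace (binary and rpc.py paths shortened to placeholders, polling loop simplified):

    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Start a 5 second random read/write load, then pull the subsystem out from under it
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # perf is expected to exit on its own; "kill -0" failing is the success condition here
    kill -0 "$perf_pid" 2>/dev/null || echo "perf ($perf_pid) exited as expected"

The repeated "Read/Write completed with error (sct=0, sc=8)" lines and the short latency summary above are that failure path: outstanding commands complete with errors once the subsystem is deleted and its qpairs drop.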
00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.064 [2024-07-15 13:42:05.491618] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=978626 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 978626 00:11:39.064 13:42:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:39.064 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.064 [2024-07-15 13:42:05.538102] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
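The second pass re-creates the subsystem and this time lets a 3 second perf run (pid 978626 above) finish normally; the script only polls the pid. The repeated "(( delay++ > 20 )) / kill -0 / sleep 0.5" trace lines that follow correspond to a loop along these lines (variable names illustrative, bounds taken from the trace):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        # give up if perf is still running after roughly 10 s (20 polls x 0.5 s)
        (( delay++ > 20 )) && { echo "perf did not finish in time" >&2; break; }
        sleep 0.5
    done
    wait "$perf_pid"    # perf was started from this shell, so its exit status is still collectable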
00:11:39.636 13:42:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:39.636 13:42:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 978626 00:11:39.636 13:42:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:40.207 13:42:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:40.207 13:42:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 978626 00:11:40.207 13:42:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:40.777 13:42:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:40.777 13:42:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 978626 00:11:40.777 13:42:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:41.038 13:42:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:41.038 13:42:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 978626 00:11:41.038 13:42:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:41.609 13:42:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:41.609 13:42:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 978626 00:11:41.609 13:42:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:42.180 13:42:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:42.180 13:42:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 978626 00:11:42.180 13:42:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:42.180 Initializing NVMe Controllers 00:11:42.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:42.180 Controller IO queue size 128, less than required. 00:11:42.180 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:42.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:42.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:42.180 Initialization complete. Launching workers. 
00:11:42.180 ======================================================== 00:11:42.180 Latency(us) 00:11:42.180 Device Information : IOPS MiB/s Average min max 00:11:42.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002244.39 1000340.74 1008462.95 00:11:42.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003071.16 1000330.00 1009245.90 00:11:42.180 ======================================================== 00:11:42.180 Total : 256.00 0.12 1002657.77 1000330.00 1009245.90 00:11:42.180 00:11:42.751 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:42.751 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 978626 00:11:42.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (978626) - No such process 00:11:42.751 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 978626 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:42.752 rmmod nvme_tcp 00:11:42.752 rmmod nvme_fabrics 00:11:42.752 rmmod nvme_keyring 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 977690 ']' 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 977690 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 977690 ']' 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 977690 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 977690 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 977690' 00:11:42.752 killing process with pid 977690 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 977690 00:11:42.752 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 977690 
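The teardown traced above is the usual nvmftestfini sequence: unload the kernel NVMe/TCP initiator modules (the rmmod messages come from modprobe -r) and stop the nvmf_tgt reactor that served the test; the network cleanup continues below. Approximately, with the pid handling reduced to the checks the trace shows and the pid value taken from this run:

    sync
    modprobe -v -r nvme-tcp        # the log shows this also removing nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics

    nvmfpid=977690                 # pid reported when nvmf_tgt was launched for this test
    if [ "$(ps --no-headers -o comm= "$nvmfpid")" = "reactor_0" ]; then
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid"
        wait "$nvmfpid" || true    # valid here because nvmf_tgt is a child of the test shell
    fi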
00:11:43.012 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:43.012 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:43.012 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:43.012 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:43.012 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:43.012 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.012 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:43.012 13:42:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.963 13:42:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:44.963 00:11:44.963 real 0m17.617s 00:11:44.963 user 0m30.297s 00:11:44.963 sys 0m6.056s 00:11:44.963 13:42:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:44.963 13:42:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.963 ************************************ 00:11:44.963 END TEST nvmf_delete_subsystem 00:11:44.963 ************************************ 00:11:44.963 13:42:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:44.963 13:42:11 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:44.963 13:42:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:44.963 13:42:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.963 13:42:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:44.963 ************************************ 00:11:44.963 START TEST nvmf_ns_masking 00:11:44.963 ************************************ 00:11:44.963 13:42:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:45.223 * Looking for test storage... 
00:11:45.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:45.223 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=fc10a267-2e78-49a7-b39b-b817a95e4c20 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=118950c6-c3bc-483a-813b-7dba92bd0f8a 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5eb14eef-9311-4b8d-b1af-7b03055e7131 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:45.224 13:42:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:51.803 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:51.803 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.803 
13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:51.803 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:51.803 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.803 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.064 13:42:18 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.064 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.064 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:52.065 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.065 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.065 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.065 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:52.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:11:52.065 00:11:52.065 --- 10.0.0.2 ping statistics --- 00:11:52.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.065 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:11:52.065 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:52.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:11:52.065 00:11:52.065 --- 10.0.0.1 ping statistics --- 00:11:52.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.065 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:11:52.065 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.065 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:52.065 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:52.065 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.065 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:52.065 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:52.065 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.065 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:52.065 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:52.325 13:42:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:52.325 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:52.325 13:42:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:52.325 13:42:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:52.325 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:52.325 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=984094 00:11:52.325 13:42:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 984094 00:11:52.325 13:42:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 984094 ']' 00:11:52.325 13:42:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.325 13:42:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.325 13:42:18 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.325 13:42:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.325 13:42:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:52.325 [2024-07-15 13:42:18.650522] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:52.325 [2024-07-15 13:42:18.650569] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.325 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.325 [2024-07-15 13:42:18.708174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.325 [2024-07-15 13:42:18.772791] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.325 [2024-07-15 13:42:18.772825] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.325 [2024-07-15 13:42:18.772832] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.325 [2024-07-15 13:42:18.772838] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.325 [2024-07-15 13:42:18.772844] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.325 [2024-07-15 13:42:18.772866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.263 13:42:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.263 13:42:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:53.263 13:42:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:53.263 13:42:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:53.263 13:42:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:53.263 13:42:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.263 13:42:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:53.263 [2024-07-15 13:42:19.611626] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.263 13:42:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:53.263 13:42:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:53.263 13:42:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:53.523 Malloc1 00:11:53.523 13:42:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:53.523 Malloc2 00:11:53.523 13:42:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
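By this point the masking target is running inside the cvl_0_0_ns_spdk network namespace with a TCP transport, two 64 MiB malloc bdevs and one subsystem, which is the fixture the visibility checks that follow operate on. Condensed from the rpc.py calls in the trace (waitforlisten and pid handling omitted, paths shortened):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    # ... wait for the RPC socket /var/tmp/spdk.sock to come up ...

    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc1     # 64 MiB backing namespace, 512 B blocks
    $RPC bdev_malloc_create 64 512 -b Malloc2
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME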
00:11:53.784 13:42:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:54.043 13:42:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.043 [2024-07-15 13:42:20.485999] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.043 13:42:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:54.043 13:42:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5eb14eef-9311-4b8d-b1af-7b03055e7131 -a 10.0.0.2 -s 4420 -i 4 00:11:54.303 13:42:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.303 13:42:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:54.303 13:42:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.303 13:42:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:54.303 13:42:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:56.842 [ 0]:0x1 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8a509fbc789b4c50bcc1f6c95f1bdd6e 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8a509fbc789b4c50bcc1f6c95f1bdd6e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.842 13:42:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
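The ns_is_visible checks running through this part of the trace reduce to a small host-side pattern: connect as a specific host NQN, list the active namespace IDs on the resulting controller, and read each namespace's NGUID, where an all-zero NGUID (or the NSID simply missing from list-ns) means the namespace is attached to the subsystem but masked from this host. A sketch of that check, assuming the controller comes up as /dev/nvme0 and NSID 1 as in the log (the -I host-id argument from the trace is omitted for brevity):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -i 4

    if nvme list-ns /dev/nvme0 | grep -q 0x1; then
        nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
        if [ "$nguid" != "00000000000000000000000000000000" ]; then
            echo "nsid 1 is visible to this host (nguid=$nguid)"
        fi
    else
        echo "nsid 1 is masked for this host"
    fi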
00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:56.842 [ 0]:0x1 00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8a509fbc789b4c50bcc1f6c95f1bdd6e 00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8a509fbc789b4c50bcc1f6c95f1bdd6e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:56.842 [ 1]:0x2 00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95592c3b0b6449ebae947a243c12c249 00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95592c3b0b6449ebae947a243c12c249 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.842 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.102 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:57.362 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:57.362 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5eb14eef-9311-4b8d-b1af-7b03055e7131 -a 10.0.0.2 -s 4420 -i 4 00:11:57.362 13:42:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:57.362 13:42:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:57.362 13:42:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.362 13:42:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:57.362 13:42:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:57.362 13:42:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:59.273 13:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:59.273 13:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:59.273 13:42:25 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.273 13:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:59.273 13:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.273 13:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:59.273 13:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:59.273 13:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:59.533 [ 0]:0x2 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:59.533 13:42:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:59.533 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95592c3b0b6449ebae947a243c12c249 00:11:59.533 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
95592c3b0b6449ebae947a243c12c249 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:59.533 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:59.793 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:59.793 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:59.793 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:59.793 [ 0]:0x1 00:11:59.793 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:59.793 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:59.793 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8a509fbc789b4c50bcc1f6c95f1bdd6e 00:11:59.793 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8a509fbc789b4c50bcc1f6c95f1bdd6e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:59.793 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:59.793 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:59.793 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:59.793 [ 1]:0x2 00:11:59.793 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:59.793 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:59.793 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95592c3b0b6449ebae947a243c12c249 00:11:59.793 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95592c3b0b6449ebae947a243c12c249 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:59.793 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:00.052 [ 0]:0x2 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:00.052 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.313 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95592c3b0b6449ebae947a243c12c249 00:12:00.313 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95592c3b0b6449ebae947a243c12c249 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.313 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:00.313 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.313 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:00.573 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:00.573 13:42:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5eb14eef-9311-4b8d-b1af-7b03055e7131 -a 10.0.0.2 -s 4420 -i 4 00:12:00.573 13:42:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:00.573 13:42:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:00.573 13:42:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.573 13:42:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:00.573 13:42:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:00.573 13:42:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
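Stripped of the xtrace noise, the target-side portion of the masking flow exercised so far is only three RPC calls, all of which appear verbatim in the trace. A condensed sketch is shown below; $SPDK_ROOT stands in for the long jenkins workspace path used throughout this log.

rpc=$SPDK_ROOT/scripts/rpc.py
# Attach Malloc1 as namespace 1, hidden from every host until explicitly allowed.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
# Grant and later revoke visibility for host1; a connected initiator observes the
# change without reconnecting, as the namespace NGUID flips between its real value
# and all zeros in the checks above.
$rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1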
00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:03.115 [ 0]:0x1 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8a509fbc789b4c50bcc1f6c95f1bdd6e 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8a509fbc789b4c50bcc1f6c95f1bdd6e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:03.115 [ 1]:0x2 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95592c3b0b6449ebae947a243c12c249 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95592c3b0b6449ebae947a243c12c249 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.115 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:03.115 [ 0]:0x2 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95592c3b0b6449ebae947a243c12c249 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95592c3b0b6449ebae947a243c12c249 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:03.376 [2024-07-15 13:42:29.832682] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:03.376 request: 00:12:03.376 { 00:12:03.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:03.376 "nsid": 2, 00:12:03.376 "host": "nqn.2016-06.io.spdk:host1", 00:12:03.376 "method": "nvmf_ns_remove_host", 00:12:03.376 "req_id": 1 00:12:03.376 } 00:12:03.376 Got JSON-RPC error response 00:12:03.376 response: 00:12:03.376 { 00:12:03.376 "code": -32602, 00:12:03.376 "message": "Invalid parameters" 00:12:03.376 } 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.376 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.636 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:03.637 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.637 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:03.637 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.637 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.637 13:42:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.637 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:03.637 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.637 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:03.637 [ 0]:0x2 00:12:03.637 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:03.637 13:42:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.637 13:42:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95592c3b0b6449ebae947a243c12c249 00:12:03.637 13:42:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
95592c3b0b6449ebae947a243c12c249 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.637 13:42:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:03.637 13:42:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.637 13:42:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=986448 00:12:03.637 13:42:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:03.637 13:42:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.637 13:42:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 986448 /var/tmp/host.sock 00:12:03.637 13:42:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 986448 ']' 00:12:03.637 13:42:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:03.637 13:42:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.637 13:42:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:03.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:03.897 13:42:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.897 13:42:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:03.897 [2024-07-15 13:42:30.210656] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
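The spdk_tgt process starting here plays the NVMe host side of the test and listens on its own RPC socket (/var/tmp/host.sock), so the same rpc.py script can drive target and host independently. A minimal sketch of that split is given below, with the addresses and NQNs taken from this run and $SPDK_ROOT again standing in for the workspace path; the expected bdev layout is the one the trace verifies a few lines further on.

host_rpc() { $SPDK_ROOT/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

# Attach to the same subsystem twice, once per host NQN; with per-host masking in
# place, nvme0 should expose only namespace 1 and nvme1 only namespace 2.
host_rpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
host_rpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1

host_rpc bdev_get_bdevs | jq -r '.[].name' | sort   # expected here: nvme0n1, nvme1n2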
00:12:03.897 [2024-07-15 13:42:30.210709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid986448 ] 00:12:03.897 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.897 [2024-07-15 13:42:30.288086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.897 [2024-07-15 13:42:30.352255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.474 13:42:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:04.474 13:42:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:04.474 13:42:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.774 13:42:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:04.774 13:42:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid fc10a267-2e78-49a7-b39b-b817a95e4c20 00:12:04.774 13:42:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:04.774 13:42:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FC10A2672E7849A7B39BB817A95E4C20 -i 00:12:05.035 13:42:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 118950c6-c3bc-483a-813b-7dba92bd0f8a 00:12:05.035 13:42:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:05.035 13:42:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 118950C6C3BC483A813B7DBA92BD0F8A -i 00:12:05.295 13:42:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:05.295 13:42:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:05.556 13:42:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:05.556 13:42:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:05.816 nvme0n1 00:12:05.816 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:05.816 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:12:06.077 nvme1n2 00:12:06.077 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:06.077 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:06.077 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:06.077 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:06.077 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:06.077 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:06.077 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:06.077 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:06.077 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:06.336 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ fc10a267-2e78-49a7-b39b-b817a95e4c20 == \f\c\1\0\a\2\6\7\-\2\e\7\8\-\4\9\a\7\-\b\3\9\b\-\b\8\1\7\a\9\5\e\4\c\2\0 ]] 00:12:06.336 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:06.336 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:06.336 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:06.595 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 118950c6-c3bc-483a-813b-7dba92bd0f8a == \1\1\8\9\5\0\c\6\-\c\3\b\c\-\4\8\3\a\-\8\1\3\b\-\7\d\b\a\9\2\b\d\0\f\8\a ]] 00:12:06.595 13:42:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 986448 00:12:06.595 13:42:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 986448 ']' 00:12:06.595 13:42:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 986448 00:12:06.595 13:42:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:06.595 13:42:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:06.595 13:42:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 986448 00:12:06.595 13:42:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:06.595 13:42:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:06.595 13:42:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 986448' 00:12:06.595 killing process with pid 986448 00:12:06.595 13:42:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 986448 00:12:06.595 13:42:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 986448 00:12:06.864 13:42:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.864 13:42:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:06.864 13:42:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:06.864 13:42:33 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:06.864 13:42:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:06.864 13:42:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:06.864 13:42:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:06.864 13:42:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:06.864 13:42:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:06.864 rmmod nvme_tcp 00:12:06.864 rmmod nvme_fabrics 00:12:07.125 rmmod nvme_keyring 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 984094 ']' 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 984094 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 984094 ']' 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 984094 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 984094 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 984094' 00:12:07.125 killing process with pid 984094 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 984094 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 984094 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.125 13:42:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.669 13:42:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:09.669 00:12:09.669 real 0m24.266s 00:12:09.669 user 0m24.290s 00:12:09.669 sys 0m7.157s 00:12:09.669 13:42:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:09.669 13:42:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:09.669 ************************************ 00:12:09.669 END TEST nvmf_ns_masking 00:12:09.669 ************************************ 00:12:09.669 13:42:35 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:09.669 13:42:35 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:09.669 13:42:35 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:09.669 13:42:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:09.669 13:42:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.669 13:42:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:09.669 ************************************ 00:12:09.669 START TEST nvmf_nvme_cli 00:12:09.669 ************************************ 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:09.669 * Looking for test storage... 00:12:09.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:09.669 13:42:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:16.272 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:16.272 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:16.272 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:16.272 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:16.272 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.533 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.533 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.533 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.533 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:16.533 13:42:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.533 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.533 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:16.799 13:42:43 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:16.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:12:16.799 00:12:16.799 --- 10.0.0.2 ping statistics --- 00:12:16.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.799 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:12:16.799 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:16.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:16.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:12:16.799 00:12:16.799 --- 10.0.0.1 ping statistics --- 00:12:16.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.799 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:12:16.799 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.799 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:16.799 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:16.799 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.799 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:16.799 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:16.799 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.799 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:16.799 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:16.799 13:42:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:16.800 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:16.800 13:42:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:16.800 13:42:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:16.800 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=991301 00:12:16.800 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 991301 00:12:16.800 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.800 13:42:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 991301 ']' 00:12:16.800 13:42:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.800 13:42:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:16.800 13:42:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.800 13:42:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:16.800 13:42:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:16.800 [2024-07-15 13:42:43.187604] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
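The nvmf_tgt instance coming up here runs inside the cvl_0_0_ns_spdk network namespace that nvmftestinit prepared just above: one port of the detected E810 pair is moved into the target namespace, the other stays in the default namespace as the initiator, and connectivity is proved in both directions before the target starts. The sketch below lists the same commands visible in the trace; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are specific to this testbed.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator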
00:12:16.800 [2024-07-15 13:42:43.187665] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.800 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.800 [2024-07-15 13:42:43.256465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.063 [2024-07-15 13:42:43.332853] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.063 [2024-07-15 13:42:43.332890] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.063 [2024-07-15 13:42:43.332897] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.063 [2024-07-15 13:42:43.332904] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.063 [2024-07-15 13:42:43.332910] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.063 [2024-07-15 13:42:43.333055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.063 [2024-07-15 13:42:43.333166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.063 [2024-07-15 13:42:43.333265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.063 [2024-07-15 13:42:43.333266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.635 13:42:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:17.635 13:42:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:17.635 13:42:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:17.635 13:42:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:17.635 13:42:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.635 [2024-07-15 13:42:44.012740] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.635 Malloc0 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.635 Malloc1 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.635 13:42:44 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.635 [2024-07-15 13:42:44.098486] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.635 13:42:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:17.896 00:12:17.896 Discovery Log Number of Records 2, Generation counter 2 00:12:17.896 =====Discovery Log Entry 0====== 00:12:17.896 trtype: tcp 00:12:17.896 adrfam: ipv4 00:12:17.896 subtype: current discovery subsystem 00:12:17.896 treq: not required 00:12:17.896 portid: 0 00:12:17.896 trsvcid: 4420 00:12:17.896 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:17.896 traddr: 10.0.0.2 00:12:17.896 eflags: explicit discovery connections, duplicate discovery information 00:12:17.896 sectype: none 00:12:17.896 =====Discovery Log Entry 1====== 00:12:17.896 trtype: tcp 00:12:17.896 adrfam: ipv4 00:12:17.896 subtype: nvme subsystem 00:12:17.896 treq: not required 00:12:17.896 portid: 0 00:12:17.896 trsvcid: 4420 00:12:17.896 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:17.896 traddr: 10.0.0.2 00:12:17.896 eflags: none 00:12:17.896 sectype: none 00:12:17.896 13:42:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:17.896 13:42:44 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:17.896 13:42:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:17.896 13:42:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:17.896 13:42:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:17.896 13:42:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:17.896 13:42:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:17.896 13:42:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:17.896 13:42:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:17.896 13:42:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:17.896 13:42:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.280 13:42:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:19.280 13:42:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:19.280 13:42:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.280 13:42:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:19.280 13:42:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:19.280 13:42:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:21.193 13:42:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:21.193 13:42:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:21.193 13:42:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:21.452 13:42:47 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:21.452 /dev/nvme0n1 ]] 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.452 13:42:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:21.712 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:21.712 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.712 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:21.712 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.712 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:21.712 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:21.712 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.712 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:21.712 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:21.712 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:21.712 13:42:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:21.712 13:42:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:21.973 rmmod nvme_tcp 00:12:21.973 rmmod nvme_fabrics 00:12:21.973 rmmod nvme_keyring 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 991301 ']' 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 991301 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 991301 ']' 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 991301 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 991301 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 991301' 00:12:21.973 killing process with pid 991301 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 991301 00:12:21.973 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 991301 00:12:22.234 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:22.234 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:22.234 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:22.234 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:22.234 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:22.234 13:42:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.234 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.234 13:42:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.779 13:42:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:24.779 00:12:24.779 real 0m14.906s 00:12:24.779 user 0m23.223s 00:12:24.779 sys 0m5.901s 00:12:24.779 13:42:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:24.779 13:42:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:24.779 ************************************ 00:12:24.779 END TEST nvmf_nvme_cli 00:12:24.779 ************************************ 00:12:24.779 13:42:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:24.779 13:42:50 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:24.779 13:42:50 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:12:24.779 13:42:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:24.779 13:42:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.779 13:42:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:24.779 ************************************ 00:12:24.779 START TEST nvmf_vfio_user 00:12:24.779 ************************************ 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:24.779 * Looking for test storage... 00:12:24.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.779 13:42:50 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:24.780 
13:42:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=993000 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 993000' 00:12:24.780 Process pid: 993000 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 993000 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 993000 ']' 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:24.780 13:42:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:24.780 [2024-07-15 13:42:50.983565] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:24.780 [2024-07-15 13:42:50.983646] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.780 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.780 [2024-07-15 13:42:51.050155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.780 [2024-07-15 13:42:51.126128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.780 [2024-07-15 13:42:51.126168] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.780 [2024-07-15 13:42:51.126176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.780 [2024-07-15 13:42:51.126182] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.780 [2024-07-15 13:42:51.126187] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
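For the nvmf_vfio_user test starting here, setup_nvmf_vfio_user repeats a similar pattern once per device, only with the VFIOUSER transport listening on a socket directory instead of a TCP address. The lines below are again a condensed sketch, not the script itself; every path, NQN and argument is taken verbatim from the trace that follows (shown for vfio-user1/cnode1; the second device repeats it with Malloc2, vfio-user2/2, cnode2 and serial SPDK2).

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # One VFIOUSER transport, then a per-device socket directory, malloc bdev,
  # subsystem, namespace and vfio-user listener.
  "$RPC" nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  "$RPC" bdev_malloc_create 64 512 -b Malloc1
  "$RPC" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  "$RPC" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

  # The identify pass traced below attaches through that socket directory:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci

The spdk_nvme_perf runs at the end of this section reuse the same -r connection string, varying only the workload arguments (-q 128 -o 4096 -w read or write, -t 5, -c 0x2).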
00:12:24.780 [2024-07-15 13:42:51.126256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.780 [2024-07-15 13:42:51.126375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.780 [2024-07-15 13:42:51.126532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.780 [2024-07-15 13:42:51.126533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.413 13:42:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:25.413 13:42:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:25.413 13:42:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:26.354 13:42:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:26.614 13:42:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:26.614 13:42:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:26.614 13:42:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:26.614 13:42:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:26.614 13:42:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:26.614 Malloc1 00:12:26.614 13:42:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:26.878 13:42:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:27.144 13:42:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:27.144 13:42:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:27.144 13:42:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:27.144 13:42:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:27.404 Malloc2 00:12:27.404 13:42:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:27.666 13:42:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:27.666 13:42:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:27.929 13:42:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:27.929 13:42:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:27.929 13:42:54 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:27.929 13:42:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:27.929 13:42:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:27.929 13:42:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:27.929 [2024-07-15 13:42:54.336211] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:27.929 [2024-07-15 13:42:54.336280] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid993666 ] 00:12:27.929 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.929 [2024-07-15 13:42:54.369754] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:27.929 [2024-07-15 13:42:54.380134] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:27.929 [2024-07-15 13:42:54.380154] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdf0201d000 00:12:27.929 [2024-07-15 13:42:54.380479] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.929 [2024-07-15 13:42:54.381487] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.929 [2024-07-15 13:42:54.382492] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.929 [2024-07-15 13:42:54.383511] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:27.929 [2024-07-15 13:42:54.384500] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:27.929 [2024-07-15 13:42:54.385509] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.929 [2024-07-15 13:42:54.386517] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:27.929 [2024-07-15 13:42:54.387521] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.929 [2024-07-15 13:42:54.388524] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:27.929 [2024-07-15 13:42:54.388534] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdf02012000 00:12:27.929 [2024-07-15 13:42:54.389864] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:27.929 [2024-07-15 13:42:54.408798] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:27.929 [2024-07-15 13:42:54.408820] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:27.929 [2024-07-15 13:42:54.411672] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:27.929 [2024-07-15 13:42:54.411720] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:27.929 [2024-07-15 13:42:54.411808] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:27.929 [2024-07-15 13:42:54.411826] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:27.929 [2024-07-15 13:42:54.411831] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:27.929 [2024-07-15 13:42:54.412667] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:27.929 [2024-07-15 13:42:54.412677] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:27.929 [2024-07-15 13:42:54.412684] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:27.929 [2024-07-15 13:42:54.413674] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:27.929 [2024-07-15 13:42:54.413683] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:27.929 [2024-07-15 13:42:54.413690] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:27.929 [2024-07-15 13:42:54.414678] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:27.929 [2024-07-15 13:42:54.414687] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:27.929 [2024-07-15 13:42:54.415682] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:27.929 [2024-07-15 13:42:54.415690] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:27.929 [2024-07-15 13:42:54.415695] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:27.929 [2024-07-15 13:42:54.415702] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:27.929 [2024-07-15 13:42:54.415807] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:27.929 [2024-07-15 13:42:54.415812] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:27.929 [2024-07-15 13:42:54.415817] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:27.929 [2024-07-15 13:42:54.416692] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:27.929 [2024-07-15 13:42:54.417693] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:27.929 [2024-07-15 13:42:54.418702] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:27.929 [2024-07-15 13:42:54.419698] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:27.929 [2024-07-15 13:42:54.419751] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:27.929 [2024-07-15 13:42:54.420716] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:27.929 [2024-07-15 13:42:54.420727] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:27.929 [2024-07-15 13:42:54.420732] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:27.929 [2024-07-15 13:42:54.420754] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:27.929 [2024-07-15 13:42:54.420761] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:27.929 [2024-07-15 13:42:54.420776] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:27.929 [2024-07-15 13:42:54.420781] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:27.929 [2024-07-15 13:42:54.420795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:27.929 [2024-07-15 13:42:54.420833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:27.929 [2024-07-15 13:42:54.420843] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:27.929 [2024-07-15 13:42:54.420850] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:27.929 [2024-07-15 13:42:54.420854] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:27.929 [2024-07-15 13:42:54.420859] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:27.929 [2024-07-15 13:42:54.420864] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:27.929 [2024-07-15 13:42:54.420868] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:27.930 [2024-07-15 13:42:54.420873] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.420881] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.420891] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:27.930 [2024-07-15 13:42:54.420902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:27.930 [2024-07-15 13:42:54.420917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.930 [2024-07-15 13:42:54.420926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.930 [2024-07-15 13:42:54.420934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.930 [2024-07-15 13:42:54.420943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.930 [2024-07-15 13:42:54.420948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.420956] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.420965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:27.930 [2024-07-15 13:42:54.420977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:27.930 [2024-07-15 13:42:54.420982] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:27.930 [2024-07-15 13:42:54.420987] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.420994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.421000] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.421009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:27.930 [2024-07-15 13:42:54.421018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:27.930 [2024-07-15 13:42:54.421078] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.421086] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.421093] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:27.930 [2024-07-15 13:42:54.421098] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:27.930 [2024-07-15 13:42:54.421104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:27.930 [2024-07-15 13:42:54.421115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:27.930 [2024-07-15 13:42:54.421129] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:27.930 [2024-07-15 13:42:54.421142] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.421150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.421157] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:27.930 [2024-07-15 13:42:54.421161] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:27.930 [2024-07-15 13:42:54.421167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:27.930 [2024-07-15 13:42:54.421183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:27.930 [2024-07-15 13:42:54.421195] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.421203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.421209] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:27.930 [2024-07-15 13:42:54.421214] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:27.930 [2024-07-15 13:42:54.421220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:27.930 [2024-07-15 13:42:54.421231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:27.930 [2024-07-15 13:42:54.421241] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.421248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:12:27.930 [2024-07-15 13:42:54.421256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.421262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.421267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.421272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.421277] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:27.930 [2024-07-15 13:42:54.421282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:27.930 [2024-07-15 13:42:54.421287] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:27.930 [2024-07-15 13:42:54.421305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:27.930 [2024-07-15 13:42:54.421315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:27.930 [2024-07-15 13:42:54.421327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:27.930 [2024-07-15 13:42:54.421336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:27.930 [2024-07-15 13:42:54.421347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:27.930 [2024-07-15 13:42:54.421356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:27.930 [2024-07-15 13:42:54.421367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:27.930 [2024-07-15 13:42:54.421379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:27.930 [2024-07-15 13:42:54.421392] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:27.930 [2024-07-15 13:42:54.421397] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:27.930 [2024-07-15 13:42:54.421400] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:27.930 [2024-07-15 13:42:54.421404] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:27.930 [2024-07-15 13:42:54.421410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:27.930 [2024-07-15 13:42:54.421418] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:27.930 
[2024-07-15 13:42:54.421422] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:27.930 [2024-07-15 13:42:54.421428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:27.930 [2024-07-15 13:42:54.421435] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:27.930 [2024-07-15 13:42:54.421441] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:27.930 [2024-07-15 13:42:54.421446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:27.930 [2024-07-15 13:42:54.421454] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:27.930 [2024-07-15 13:42:54.421458] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:27.930 [2024-07-15 13:42:54.421464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:27.930 [2024-07-15 13:42:54.421471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:27.930 [2024-07-15 13:42:54.421482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:27.930 [2024-07-15 13:42:54.421492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:27.930 [2024-07-15 13:42:54.421499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:27.930 ===================================================== 00:12:27.930 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:27.930 ===================================================== 00:12:27.930 Controller Capabilities/Features 00:12:27.930 ================================ 00:12:27.930 Vendor ID: 4e58 00:12:27.930 Subsystem Vendor ID: 4e58 00:12:27.930 Serial Number: SPDK1 00:12:27.930 Model Number: SPDK bdev Controller 00:12:27.930 Firmware Version: 24.09 00:12:27.930 Recommended Arb Burst: 6 00:12:27.930 IEEE OUI Identifier: 8d 6b 50 00:12:27.930 Multi-path I/O 00:12:27.930 May have multiple subsystem ports: Yes 00:12:27.930 May have multiple controllers: Yes 00:12:27.930 Associated with SR-IOV VF: No 00:12:27.930 Max Data Transfer Size: 131072 00:12:27.930 Max Number of Namespaces: 32 00:12:27.930 Max Number of I/O Queues: 127 00:12:27.930 NVMe Specification Version (VS): 1.3 00:12:27.930 NVMe Specification Version (Identify): 1.3 00:12:27.930 Maximum Queue Entries: 256 00:12:27.930 Contiguous Queues Required: Yes 00:12:27.930 Arbitration Mechanisms Supported 00:12:27.930 Weighted Round Robin: Not Supported 00:12:27.930 Vendor Specific: Not Supported 00:12:27.930 Reset Timeout: 15000 ms 00:12:27.930 Doorbell Stride: 4 bytes 00:12:27.930 NVM Subsystem Reset: Not Supported 00:12:27.930 Command Sets Supported 00:12:27.930 NVM Command Set: Supported 00:12:27.930 Boot Partition: Not Supported 00:12:27.930 Memory Page Size Minimum: 4096 bytes 00:12:27.930 Memory Page Size Maximum: 4096 bytes 00:12:27.930 Persistent Memory Region: Not Supported 
00:12:27.931 Optional Asynchronous Events Supported 00:12:27.931 Namespace Attribute Notices: Supported 00:12:27.931 Firmware Activation Notices: Not Supported 00:12:27.931 ANA Change Notices: Not Supported 00:12:27.931 PLE Aggregate Log Change Notices: Not Supported 00:12:27.931 LBA Status Info Alert Notices: Not Supported 00:12:27.931 EGE Aggregate Log Change Notices: Not Supported 00:12:27.931 Normal NVM Subsystem Shutdown event: Not Supported 00:12:27.931 Zone Descriptor Change Notices: Not Supported 00:12:27.931 Discovery Log Change Notices: Not Supported 00:12:27.931 Controller Attributes 00:12:27.931 128-bit Host Identifier: Supported 00:12:27.931 Non-Operational Permissive Mode: Not Supported 00:12:27.931 NVM Sets: Not Supported 00:12:27.931 Read Recovery Levels: Not Supported 00:12:27.931 Endurance Groups: Not Supported 00:12:27.931 Predictable Latency Mode: Not Supported 00:12:27.931 Traffic Based Keep ALive: Not Supported 00:12:27.931 Namespace Granularity: Not Supported 00:12:27.931 SQ Associations: Not Supported 00:12:27.931 UUID List: Not Supported 00:12:27.931 Multi-Domain Subsystem: Not Supported 00:12:27.931 Fixed Capacity Management: Not Supported 00:12:27.931 Variable Capacity Management: Not Supported 00:12:27.931 Delete Endurance Group: Not Supported 00:12:27.931 Delete NVM Set: Not Supported 00:12:27.931 Extended LBA Formats Supported: Not Supported 00:12:27.931 Flexible Data Placement Supported: Not Supported 00:12:27.931 00:12:27.931 Controller Memory Buffer Support 00:12:27.931 ================================ 00:12:27.931 Supported: No 00:12:27.931 00:12:27.931 Persistent Memory Region Support 00:12:27.931 ================================ 00:12:27.931 Supported: No 00:12:27.931 00:12:27.931 Admin Command Set Attributes 00:12:27.931 ============================ 00:12:27.931 Security Send/Receive: Not Supported 00:12:27.931 Format NVM: Not Supported 00:12:27.931 Firmware Activate/Download: Not Supported 00:12:27.931 Namespace Management: Not Supported 00:12:27.931 Device Self-Test: Not Supported 00:12:27.931 Directives: Not Supported 00:12:27.931 NVMe-MI: Not Supported 00:12:27.931 Virtualization Management: Not Supported 00:12:27.931 Doorbell Buffer Config: Not Supported 00:12:27.931 Get LBA Status Capability: Not Supported 00:12:27.931 Command & Feature Lockdown Capability: Not Supported 00:12:27.931 Abort Command Limit: 4 00:12:27.931 Async Event Request Limit: 4 00:12:27.931 Number of Firmware Slots: N/A 00:12:27.931 Firmware Slot 1 Read-Only: N/A 00:12:27.931 Firmware Activation Without Reset: N/A 00:12:27.931 Multiple Update Detection Support: N/A 00:12:27.931 Firmware Update Granularity: No Information Provided 00:12:27.931 Per-Namespace SMART Log: No 00:12:27.931 Asymmetric Namespace Access Log Page: Not Supported 00:12:27.931 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:27.931 Command Effects Log Page: Supported 00:12:27.931 Get Log Page Extended Data: Supported 00:12:27.931 Telemetry Log Pages: Not Supported 00:12:27.931 Persistent Event Log Pages: Not Supported 00:12:27.931 Supported Log Pages Log Page: May Support 00:12:27.931 Commands Supported & Effects Log Page: Not Supported 00:12:27.931 Feature Identifiers & Effects Log Page:May Support 00:12:27.931 NVMe-MI Commands & Effects Log Page: May Support 00:12:27.931 Data Area 4 for Telemetry Log: Not Supported 00:12:27.931 Error Log Page Entries Supported: 128 00:12:27.931 Keep Alive: Supported 00:12:27.931 Keep Alive Granularity: 10000 ms 00:12:27.931 00:12:27.931 NVM Command Set Attributes 
00:12:27.931 ========================== 00:12:27.931 Submission Queue Entry Size 00:12:27.931 Max: 64 00:12:27.931 Min: 64 00:12:27.931 Completion Queue Entry Size 00:12:27.931 Max: 16 00:12:27.931 Min: 16 00:12:27.931 Number of Namespaces: 32 00:12:27.931 Compare Command: Supported 00:12:27.931 Write Uncorrectable Command: Not Supported 00:12:27.931 Dataset Management Command: Supported 00:12:27.931 Write Zeroes Command: Supported 00:12:27.931 Set Features Save Field: Not Supported 00:12:27.931 Reservations: Not Supported 00:12:27.931 Timestamp: Not Supported 00:12:27.931 Copy: Supported 00:12:27.931 Volatile Write Cache: Present 00:12:27.931 Atomic Write Unit (Normal): 1 00:12:27.931 Atomic Write Unit (PFail): 1 00:12:27.931 Atomic Compare & Write Unit: 1 00:12:27.931 Fused Compare & Write: Supported 00:12:27.931 Scatter-Gather List 00:12:27.931 SGL Command Set: Supported (Dword aligned) 00:12:27.931 SGL Keyed: Not Supported 00:12:27.931 SGL Bit Bucket Descriptor: Not Supported 00:12:27.931 SGL Metadata Pointer: Not Supported 00:12:27.931 Oversized SGL: Not Supported 00:12:27.931 SGL Metadata Address: Not Supported 00:12:27.931 SGL Offset: Not Supported 00:12:27.931 Transport SGL Data Block: Not Supported 00:12:27.931 Replay Protected Memory Block: Not Supported 00:12:27.931 00:12:27.931 Firmware Slot Information 00:12:27.931 ========================= 00:12:27.931 Active slot: 1 00:12:27.931 Slot 1 Firmware Revision: 24.09 00:12:27.931 00:12:27.931 00:12:27.931 Commands Supported and Effects 00:12:27.931 ============================== 00:12:27.931 Admin Commands 00:12:27.931 -------------- 00:12:27.931 Get Log Page (02h): Supported 00:12:27.931 Identify (06h): Supported 00:12:27.931 Abort (08h): Supported 00:12:27.931 Set Features (09h): Supported 00:12:27.931 Get Features (0Ah): Supported 00:12:27.931 Asynchronous Event Request (0Ch): Supported 00:12:27.931 Keep Alive (18h): Supported 00:12:27.931 I/O Commands 00:12:27.931 ------------ 00:12:27.931 Flush (00h): Supported LBA-Change 00:12:27.931 Write (01h): Supported LBA-Change 00:12:27.931 Read (02h): Supported 00:12:27.931 Compare (05h): Supported 00:12:27.931 Write Zeroes (08h): Supported LBA-Change 00:12:27.931 Dataset Management (09h): Supported LBA-Change 00:12:27.931 Copy (19h): Supported LBA-Change 00:12:27.931 00:12:27.931 Error Log 00:12:27.931 ========= 00:12:27.931 00:12:27.931 Arbitration 00:12:27.931 =========== 00:12:27.931 Arbitration Burst: 1 00:12:27.931 00:12:27.931 Power Management 00:12:27.931 ================ 00:12:27.931 Number of Power States: 1 00:12:27.931 Current Power State: Power State #0 00:12:27.931 Power State #0: 00:12:27.931 Max Power: 0.00 W 00:12:27.931 Non-Operational State: Operational 00:12:27.931 Entry Latency: Not Reported 00:12:27.931 Exit Latency: Not Reported 00:12:27.931 Relative Read Throughput: 0 00:12:27.931 Relative Read Latency: 0 00:12:27.931 Relative Write Throughput: 0 00:12:27.931 Relative Write Latency: 0 00:12:27.931 Idle Power: Not Reported 00:12:27.931 Active Power: Not Reported 00:12:27.931 Non-Operational Permissive Mode: Not Supported 00:12:27.931 00:12:27.931 Health Information 00:12:27.931 ================== 00:12:27.931 Critical Warnings: 00:12:27.931 Available Spare Space: OK 00:12:27.931 Temperature: OK 00:12:27.931 Device Reliability: OK 00:12:27.931 Read Only: No 00:12:27.931 Volatile Memory Backup: OK 00:12:27.931 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:27.931 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:27.931 Available Spare: 0% 00:12:27.931 
Available Sp[2024-07-15 13:42:54.421692] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:27.931 [2024-07-15 13:42:54.421701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:27.931 [2024-07-15 13:42:54.421730] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:27.931 [2024-07-15 13:42:54.421740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.931 [2024-07-15 13:42:54.421747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.931 [2024-07-15 13:42:54.421753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.931 [2024-07-15 13:42:54.421759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.931 [2024-07-15 13:42:54.422728] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:27.931 [2024-07-15 13:42:54.422740] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:27.931 [2024-07-15 13:42:54.423727] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:27.931 [2024-07-15 13:42:54.423769] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:27.931 [2024-07-15 13:42:54.423775] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:27.931 [2024-07-15 13:42:54.424737] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:27.931 [2024-07-15 13:42:54.424748] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:27.931 [2024-07-15 13:42:54.424811] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:27.931 [2024-07-15 13:42:54.429131] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:28.192 are Threshold: 0% 00:12:28.192 Life Percentage Used: 0% 00:12:28.192 Data Units Read: 0 00:12:28.192 Data Units Written: 0 00:12:28.192 Host Read Commands: 0 00:12:28.192 Host Write Commands: 0 00:12:28.192 Controller Busy Time: 0 minutes 00:12:28.192 Power Cycles: 0 00:12:28.192 Power On Hours: 0 hours 00:12:28.192 Unsafe Shutdowns: 0 00:12:28.192 Unrecoverable Media Errors: 0 00:12:28.192 Lifetime Error Log Entries: 0 00:12:28.192 Warning Temperature Time: 0 minutes 00:12:28.192 Critical Temperature Time: 0 minutes 00:12:28.192 00:12:28.192 Number of Queues 00:12:28.192 ================ 00:12:28.192 Number of I/O Submission Queues: 127 00:12:28.192 Number of I/O Completion Queues: 127 00:12:28.192 00:12:28.192 Active Namespaces 00:12:28.192 ================= 00:12:28.192 Namespace ID:1 00:12:28.192 Error Recovery Timeout: Unlimited 00:12:28.192 Command 
Set Identifier: NVM (00h) 00:12:28.192 Deallocate: Supported 00:12:28.192 Deallocated/Unwritten Error: Not Supported 00:12:28.192 Deallocated Read Value: Unknown 00:12:28.192 Deallocate in Write Zeroes: Not Supported 00:12:28.192 Deallocated Guard Field: 0xFFFF 00:12:28.192 Flush: Supported 00:12:28.192 Reservation: Supported 00:12:28.192 Namespace Sharing Capabilities: Multiple Controllers 00:12:28.192 Size (in LBAs): 131072 (0GiB) 00:12:28.192 Capacity (in LBAs): 131072 (0GiB) 00:12:28.192 Utilization (in LBAs): 131072 (0GiB) 00:12:28.192 NGUID: 2749E5AAA1CE4D2F9B34702B195282EB 00:12:28.192 UUID: 2749e5aa-a1ce-4d2f-9b34-702b195282eb 00:12:28.192 Thin Provisioning: Not Supported 00:12:28.192 Per-NS Atomic Units: Yes 00:12:28.192 Atomic Boundary Size (Normal): 0 00:12:28.192 Atomic Boundary Size (PFail): 0 00:12:28.192 Atomic Boundary Offset: 0 00:12:28.192 Maximum Single Source Range Length: 65535 00:12:28.192 Maximum Copy Length: 65535 00:12:28.192 Maximum Source Range Count: 1 00:12:28.192 NGUID/EUI64 Never Reused: No 00:12:28.192 Namespace Write Protected: No 00:12:28.192 Number of LBA Formats: 1 00:12:28.192 Current LBA Format: LBA Format #00 00:12:28.192 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:28.192 00:12:28.192 13:42:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:28.192 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.192 [2024-07-15 13:42:54.613636] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:33.479 Initializing NVMe Controllers 00:12:33.479 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:33.479 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:33.479 Initialization complete. Launching workers. 00:12:33.479 ======================================================== 00:12:33.479 Latency(us) 00:12:33.479 Device Information : IOPS MiB/s Average min max 00:12:33.479 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40063.84 156.50 3194.58 829.98 6830.13 00:12:33.479 ======================================================== 00:12:33.479 Total : 40063.84 156.50 3194.58 829.98 6830.13 00:12:33.479 00:12:33.479 [2024-07-15 13:42:59.629957] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:33.479 13:42:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:33.479 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.479 [2024-07-15 13:42:59.813817] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:38.760 Initializing NVMe Controllers 00:12:38.760 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:38.760 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:38.760 Initialization complete. Launching workers. 
00:12:38.760 ======================================================== 00:12:38.760 Latency(us) 00:12:38.760 Device Information : IOPS MiB/s Average min max 00:12:38.760 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16053.17 62.71 7979.06 5982.00 9977.94 00:12:38.760 ======================================================== 00:12:38.760 Total : 16053.17 62.71 7979.06 5982.00 9977.94 00:12:38.760 00:12:38.760 [2024-07-15 13:43:04.853941] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:38.760 13:43:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:38.760 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.760 [2024-07-15 13:43:05.048847] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:44.046 [2024-07-15 13:43:10.113316] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:44.046 Initializing NVMe Controllers 00:12:44.046 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:44.046 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:44.046 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:44.046 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:44.046 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:44.046 Initialization complete. Launching workers. 00:12:44.046 Starting thread on core 2 00:12:44.046 Starting thread on core 3 00:12:44.046 Starting thread on core 1 00:12:44.046 13:43:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:44.046 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.046 [2024-07-15 13:43:10.372543] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:48.250 [2024-07-15 13:43:14.279255] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.250 Initializing NVMe Controllers 00:12:48.250 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:48.250 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:48.250 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:48.250 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:48.250 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:48.250 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:48.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:48.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:48.250 Initialization complete. Launching workers. 
00:12:48.250 Starting thread on core 1 with urgent priority queue 00:12:48.250 Starting thread on core 2 with urgent priority queue 00:12:48.250 Starting thread on core 3 with urgent priority queue 00:12:48.250 Starting thread on core 0 with urgent priority queue 00:12:48.250 SPDK bdev Controller (SPDK1 ) core 0: 1555.00 IO/s 64.31 secs/100000 ios 00:12:48.250 SPDK bdev Controller (SPDK1 ) core 1: 1435.67 IO/s 69.65 secs/100000 ios 00:12:48.250 SPDK bdev Controller (SPDK1 ) core 2: 1155.67 IO/s 86.53 secs/100000 ios 00:12:48.250 SPDK bdev Controller (SPDK1 ) core 3: 1479.00 IO/s 67.61 secs/100000 ios 00:12:48.250 ======================================================== 00:12:48.250 00:12:48.250 13:43:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:48.250 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.250 [2024-07-15 13:43:14.543648] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:48.250 Initializing NVMe Controllers 00:12:48.250 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:48.250 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:48.250 Namespace ID: 1 size: 0GB 00:12:48.250 Initialization complete. 00:12:48.250 INFO: using host memory buffer for IO 00:12:48.250 Hello world! 00:12:48.250 [2024-07-15 13:43:14.578874] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.250 13:43:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:48.250 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.511 [2024-07-15 13:43:14.837597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:49.452 Initializing NVMe Controllers 00:12:49.452 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:49.452 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:49.452 Initialization complete. Launching workers. 
00:12:49.452 submit (in ns) avg, min, max = 7851.3, 3905.8, 4000093.3 00:12:49.452 complete (in ns) avg, min, max = 16527.7, 2370.0, 3998495.0 00:12:49.452 00:12:49.452 Submit histogram 00:12:49.452 ================ 00:12:49.452 Range in us Cumulative Count 00:12:49.452 3.893 - 3.920: 0.6718% ( 128) 00:12:49.452 3.920 - 3.947: 3.4378% ( 527) 00:12:49.452 3.947 - 3.973: 11.6045% ( 1556) 00:12:49.452 3.973 - 4.000: 22.9885% ( 2169) 00:12:49.452 4.000 - 4.027: 34.4618% ( 2186) 00:12:49.452 4.027 - 4.053: 44.9063% ( 1990) 00:12:49.452 4.053 - 4.080: 60.4997% ( 2971) 00:12:49.452 4.080 - 4.107: 75.5734% ( 2872) 00:12:49.452 4.107 - 4.133: 88.1069% ( 2388) 00:12:49.452 4.133 - 4.160: 94.8932% ( 1293) 00:12:49.452 4.160 - 4.187: 98.0213% ( 596) 00:12:49.452 4.187 - 4.213: 99.0500% ( 196) 00:12:49.452 4.213 - 4.240: 99.3229% ( 52) 00:12:49.452 4.240 - 4.267: 99.4069% ( 16) 00:12:49.452 4.267 - 4.293: 99.4174% ( 2) 00:12:49.452 4.293 - 4.320: 99.4279% ( 2) 00:12:49.452 4.667 - 4.693: 99.4332% ( 1) 00:12:49.452 4.773 - 4.800: 99.4437% ( 2) 00:12:49.452 4.800 - 4.827: 99.4489% ( 1) 00:12:49.452 4.827 - 4.853: 99.4542% ( 1) 00:12:49.452 5.120 - 5.147: 99.4647% ( 2) 00:12:49.452 5.147 - 5.173: 99.4699% ( 1) 00:12:49.452 5.227 - 5.253: 99.4751% ( 1) 00:12:49.452 5.440 - 5.467: 99.4804% ( 1) 00:12:49.452 5.467 - 5.493: 99.4856% ( 1) 00:12:49.452 5.653 - 5.680: 99.4909% ( 1) 00:12:49.452 5.760 - 5.787: 99.4961% ( 1) 00:12:49.452 5.893 - 5.920: 99.5014% ( 1) 00:12:49.452 5.920 - 5.947: 99.5119% ( 2) 00:12:49.452 5.947 - 5.973: 99.5171% ( 1) 00:12:49.452 5.973 - 6.000: 99.5224% ( 1) 00:12:49.452 6.000 - 6.027: 99.5381% ( 3) 00:12:49.452 6.027 - 6.053: 99.5434% ( 1) 00:12:49.452 6.053 - 6.080: 99.5486% ( 1) 00:12:49.452 6.080 - 6.107: 99.5644% ( 3) 00:12:49.452 6.107 - 6.133: 99.5749% ( 2) 00:12:49.452 6.160 - 6.187: 99.5854% ( 2) 00:12:49.452 6.213 - 6.240: 99.5906% ( 1) 00:12:49.452 6.240 - 6.267: 99.5959% ( 1) 00:12:49.452 6.267 - 6.293: 99.6011% ( 1) 00:12:49.452 6.373 - 6.400: 99.6116% ( 2) 00:12:49.452 6.400 - 6.427: 99.6169% ( 1) 00:12:49.452 6.427 - 6.453: 99.6221% ( 1) 00:12:49.452 6.453 - 6.480: 99.6274% ( 1) 00:12:49.452 6.480 - 6.507: 99.6379% ( 2) 00:12:49.452 6.533 - 6.560: 99.6431% ( 1) 00:12:49.452 6.560 - 6.587: 99.6483% ( 1) 00:12:49.452 6.587 - 6.613: 99.6536% ( 1) 00:12:49.452 6.640 - 6.667: 99.6641% ( 2) 00:12:49.452 6.667 - 6.693: 99.6746% ( 2) 00:12:49.452 6.693 - 6.720: 99.6798% ( 1) 00:12:49.452 6.720 - 6.747: 99.6851% ( 1) 00:12:49.452 6.747 - 6.773: 99.7008% ( 3) 00:12:49.452 6.773 - 6.800: 99.7113% ( 2) 00:12:49.452 6.800 - 6.827: 99.7166% ( 1) 00:12:49.452 6.827 - 6.880: 99.7323% ( 3) 00:12:49.452 6.933 - 6.987: 99.7481% ( 3) 00:12:49.452 6.987 - 7.040: 99.7586% ( 2) 00:12:49.452 7.040 - 7.093: 99.7743% ( 3) 00:12:49.452 7.093 - 7.147: 99.8058% ( 6) 00:12:49.452 7.200 - 7.253: 99.8163% ( 2) 00:12:49.452 7.253 - 7.307: 99.8216% ( 1) 00:12:49.452 7.307 - 7.360: 99.8320% ( 2) 00:12:49.452 7.360 - 7.413: 99.8373% ( 1) 00:12:49.452 7.413 - 7.467: 99.8478% ( 2) 00:12:49.452 7.467 - 7.520: 99.8530% ( 1) 00:12:49.452 7.573 - 7.627: 99.8635% ( 2) 00:12:49.452 7.627 - 7.680: 99.8688% ( 1) 00:12:49.452 7.733 - 7.787: 99.8793% ( 2) 00:12:49.452 8.427 - 8.480: 99.8845% ( 1) 00:12:49.452 12.000 - 12.053: 99.8898% ( 1) 00:12:49.452 15.360 - 15.467: 99.8950% ( 1) 00:12:49.452 34.133 - 34.347: 99.9003% ( 1) 00:12:49.452 147.627 - 148.480: 99.9055% ( 1) 00:12:49.452 3986.773 - 4014.080: 100.0000% ( 18) 00:12:49.452 00:12:49.452 Complete histogram 00:12:49.452 ================== 00:12:49.452 Range 
in us Cumulative Count 00:12:49.452 2.360 - 2.373: 0.0052% ( 1) 00:12:49.452 2.373 - 2.387: 0.0315% ( 5) 00:12:49.452 2.387 - [2024-07-15 13:43:15.858075] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:49.452 2.400: 1.0392% ( 192) 00:12:49.452 2.400 - 2.413: 1.1337% ( 18) 00:12:49.452 2.413 - 2.427: 1.2911% ( 30) 00:12:49.452 2.427 - 2.440: 5.6736% ( 835) 00:12:49.452 2.440 - 2.453: 53.8760% ( 9184) 00:12:49.452 2.453 - 2.467: 60.0798% ( 1182) 00:12:49.452 2.467 - 2.480: 73.3795% ( 2534) 00:12:49.452 2.480 - 2.493: 80.0924% ( 1279) 00:12:49.452 2.493 - 2.507: 81.8139% ( 328) 00:12:49.452 2.507 - 2.520: 87.3826% ( 1061) 00:12:49.452 2.520 - 2.533: 93.1297% ( 1095) 00:12:49.452 2.533 - 2.547: 95.9586% ( 539) 00:12:49.452 2.547 - 2.560: 98.1000% ( 408) 00:12:49.452 2.560 - 2.573: 99.1550% ( 201) 00:12:49.452 2.573 - 2.587: 99.4069% ( 48) 00:12:49.452 2.587 - 2.600: 99.4332% ( 5) 00:12:49.452 2.600 - 2.613: 99.4437% ( 2) 00:12:49.452 2.640 - 2.653: 99.4489% ( 1) 00:12:49.452 2.653 - 2.667: 99.4542% ( 1) 00:12:49.452 2.813 - 2.827: 99.4594% ( 1) 00:12:49.452 3.147 - 3.160: 99.4647% ( 1) 00:12:49.452 4.240 - 4.267: 99.4699% ( 1) 00:12:49.452 4.293 - 4.320: 99.4751% ( 1) 00:12:49.452 4.320 - 4.347: 99.4856% ( 2) 00:12:49.452 4.453 - 4.480: 99.4909% ( 1) 00:12:49.452 4.480 - 4.507: 99.4961% ( 1) 00:12:49.452 4.507 - 4.533: 99.5014% ( 1) 00:12:49.452 4.613 - 4.640: 99.5066% ( 1) 00:12:49.452 4.667 - 4.693: 99.5119% ( 1) 00:12:49.452 4.693 - 4.720: 99.5224% ( 2) 00:12:49.452 4.800 - 4.827: 99.5276% ( 1) 00:12:49.452 4.880 - 4.907: 99.5434% ( 3) 00:12:49.452 5.013 - 5.040: 99.5486% ( 1) 00:12:49.452 5.067 - 5.093: 99.5539% ( 1) 00:12:49.452 5.227 - 5.253: 99.5591% ( 1) 00:12:49.452 5.387 - 5.413: 99.5696% ( 2) 00:12:49.452 5.547 - 5.573: 99.5801% ( 2) 00:12:49.452 5.573 - 5.600: 99.5906% ( 2) 00:12:49.452 5.653 - 5.680: 99.6011% ( 2) 00:12:49.452 5.760 - 5.787: 99.6064% ( 1) 00:12:49.452 5.787 - 5.813: 99.6116% ( 1) 00:12:49.452 5.813 - 5.840: 99.6169% ( 1) 00:12:49.452 5.867 - 5.893: 99.6221% ( 1) 00:12:49.452 10.187 - 10.240: 99.6274% ( 1) 00:12:49.452 10.613 - 10.667: 99.6326% ( 1) 00:12:49.452 11.627 - 11.680: 99.6379% ( 1) 00:12:49.452 44.800 - 45.013: 99.6431% ( 1) 00:12:49.452 139.093 - 139.947: 99.6483% ( 1) 00:12:49.452 3986.773 - 4014.080: 100.0000% ( 67) 00:12:49.452 00:12:49.452 13:43:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:49.453 13:43:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:49.453 13:43:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:49.453 13:43:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:49.453 13:43:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:49.713 [ 00:12:49.713 { 00:12:49.713 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:49.713 "subtype": "Discovery", 00:12:49.713 "listen_addresses": [], 00:12:49.713 "allow_any_host": true, 00:12:49.713 "hosts": [] 00:12:49.713 }, 00:12:49.713 { 00:12:49.713 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:49.713 "subtype": "NVMe", 00:12:49.713 "listen_addresses": [ 00:12:49.713 { 00:12:49.713 "trtype": "VFIOUSER", 00:12:49.713 "adrfam": "IPv4", 00:12:49.713 "traddr": 
"/var/run/vfio-user/domain/vfio-user1/1", 00:12:49.713 "trsvcid": "0" 00:12:49.713 } 00:12:49.713 ], 00:12:49.713 "allow_any_host": true, 00:12:49.713 "hosts": [], 00:12:49.713 "serial_number": "SPDK1", 00:12:49.713 "model_number": "SPDK bdev Controller", 00:12:49.713 "max_namespaces": 32, 00:12:49.713 "min_cntlid": 1, 00:12:49.713 "max_cntlid": 65519, 00:12:49.713 "namespaces": [ 00:12:49.713 { 00:12:49.713 "nsid": 1, 00:12:49.713 "bdev_name": "Malloc1", 00:12:49.713 "name": "Malloc1", 00:12:49.713 "nguid": "2749E5AAA1CE4D2F9B34702B195282EB", 00:12:49.713 "uuid": "2749e5aa-a1ce-4d2f-9b34-702b195282eb" 00:12:49.713 } 00:12:49.713 ] 00:12:49.713 }, 00:12:49.713 { 00:12:49.713 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:49.713 "subtype": "NVMe", 00:12:49.713 "listen_addresses": [ 00:12:49.713 { 00:12:49.713 "trtype": "VFIOUSER", 00:12:49.713 "adrfam": "IPv4", 00:12:49.713 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:49.713 "trsvcid": "0" 00:12:49.713 } 00:12:49.713 ], 00:12:49.713 "allow_any_host": true, 00:12:49.713 "hosts": [], 00:12:49.713 "serial_number": "SPDK2", 00:12:49.713 "model_number": "SPDK bdev Controller", 00:12:49.713 "max_namespaces": 32, 00:12:49.713 "min_cntlid": 1, 00:12:49.713 "max_cntlid": 65519, 00:12:49.713 "namespaces": [ 00:12:49.713 { 00:12:49.713 "nsid": 1, 00:12:49.713 "bdev_name": "Malloc2", 00:12:49.713 "name": "Malloc2", 00:12:49.713 "nguid": "C8507890D1C54A01936D61AD9F8F8C78", 00:12:49.713 "uuid": "c8507890-d1c5-4a01-936d-61ad9f8f8c78" 00:12:49.713 } 00:12:49.713 ] 00:12:49.713 } 00:12:49.713 ] 00:12:49.713 13:43:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:49.713 13:43:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=997841 00:12:49.713 13:43:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:49.713 13:43:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:49.713 13:43:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:49.713 13:43:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:49.713 13:43:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:49.713 13:43:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:49.713 13:43:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:49.713 13:43:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:49.713 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.713 Malloc3 00:12:49.974 [2024-07-15 13:43:16.241598] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:49.974 13:43:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:49.974 [2024-07-15 13:43:16.412704] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:49.974 13:43:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:49.974 Asynchronous Event Request test 00:12:49.974 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:49.974 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:49.974 Registering asynchronous event callbacks... 00:12:49.974 Starting namespace attribute notice tests for all controllers... 00:12:49.974 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:49.974 aer_cb - Changed Namespace 00:12:49.974 Cleaning up... 00:12:50.268 [ 00:12:50.268 { 00:12:50.268 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:50.268 "subtype": "Discovery", 00:12:50.268 "listen_addresses": [], 00:12:50.268 "allow_any_host": true, 00:12:50.268 "hosts": [] 00:12:50.268 }, 00:12:50.268 { 00:12:50.268 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:50.268 "subtype": "NVMe", 00:12:50.268 "listen_addresses": [ 00:12:50.268 { 00:12:50.268 "trtype": "VFIOUSER", 00:12:50.268 "adrfam": "IPv4", 00:12:50.268 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:50.268 "trsvcid": "0" 00:12:50.268 } 00:12:50.268 ], 00:12:50.268 "allow_any_host": true, 00:12:50.268 "hosts": [], 00:12:50.268 "serial_number": "SPDK1", 00:12:50.268 "model_number": "SPDK bdev Controller", 00:12:50.268 "max_namespaces": 32, 00:12:50.268 "min_cntlid": 1, 00:12:50.268 "max_cntlid": 65519, 00:12:50.268 "namespaces": [ 00:12:50.268 { 00:12:50.268 "nsid": 1, 00:12:50.268 "bdev_name": "Malloc1", 00:12:50.268 "name": "Malloc1", 00:12:50.268 "nguid": "2749E5AAA1CE4D2F9B34702B195282EB", 00:12:50.268 "uuid": "2749e5aa-a1ce-4d2f-9b34-702b195282eb" 00:12:50.268 }, 00:12:50.268 { 00:12:50.268 "nsid": 2, 00:12:50.268 "bdev_name": "Malloc3", 00:12:50.268 "name": "Malloc3", 00:12:50.268 "nguid": "C96D39A32A2749D5B4097E175884A72B", 00:12:50.268 "uuid": "c96d39a3-2a27-49d5-b409-7e175884a72b" 00:12:50.268 } 00:12:50.268 ] 00:12:50.268 }, 00:12:50.268 { 00:12:50.268 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:50.268 "subtype": "NVMe", 00:12:50.268 "listen_addresses": [ 00:12:50.268 { 00:12:50.268 "trtype": "VFIOUSER", 00:12:50.268 "adrfam": "IPv4", 00:12:50.268 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:50.268 "trsvcid": "0" 00:12:50.268 } 00:12:50.268 ], 00:12:50.268 "allow_any_host": true, 00:12:50.268 "hosts": [], 00:12:50.268 "serial_number": "SPDK2", 00:12:50.268 "model_number": "SPDK bdev Controller", 00:12:50.268 
"max_namespaces": 32, 00:12:50.268 "min_cntlid": 1, 00:12:50.268 "max_cntlid": 65519, 00:12:50.268 "namespaces": [ 00:12:50.268 { 00:12:50.268 "nsid": 1, 00:12:50.268 "bdev_name": "Malloc2", 00:12:50.268 "name": "Malloc2", 00:12:50.268 "nguid": "C8507890D1C54A01936D61AD9F8F8C78", 00:12:50.268 "uuid": "c8507890-d1c5-4a01-936d-61ad9f8f8c78" 00:12:50.268 } 00:12:50.268 ] 00:12:50.268 } 00:12:50.268 ] 00:12:50.268 13:43:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 997841 00:12:50.268 13:43:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:50.268 13:43:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:50.268 13:43:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:50.268 13:43:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:50.268 [2024-07-15 13:43:16.635130] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:50.268 [2024-07-15 13:43:16.635177] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid998087 ] 00:12:50.268 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.268 [2024-07-15 13:43:16.670667] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:50.268 [2024-07-15 13:43:16.679394] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:50.268 [2024-07-15 13:43:16.679416] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc72ef9e000 00:12:50.268 [2024-07-15 13:43:16.680391] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:50.268 [2024-07-15 13:43:16.681395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:50.268 [2024-07-15 13:43:16.682402] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:50.268 [2024-07-15 13:43:16.683414] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:50.268 [2024-07-15 13:43:16.684421] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:50.268 [2024-07-15 13:43:16.685427] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:50.268 [2024-07-15 13:43:16.686433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:50.268 [2024-07-15 13:43:16.687438] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:50.268 [2024-07-15 13:43:16.688452] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:50.268 [2024-07-15 13:43:16.688462] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc72ef93000 00:12:50.268 [2024-07-15 13:43:16.689787] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:50.268 [2024-07-15 13:43:16.710280] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:50.268 [2024-07-15 13:43:16.710304] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:50.268 [2024-07-15 13:43:16.712370] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:50.268 [2024-07-15 13:43:16.712414] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:50.269 [2024-07-15 13:43:16.712496] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:50.269 [2024-07-15 13:43:16.712512] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:50.269 [2024-07-15 13:43:16.712518] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:50.269 [2024-07-15 13:43:16.713373] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:50.269 [2024-07-15 13:43:16.713383] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:50.269 [2024-07-15 13:43:16.713390] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:50.269 [2024-07-15 13:43:16.714379] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:50.269 [2024-07-15 13:43:16.714389] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:50.269 [2024-07-15 13:43:16.714396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:50.269 [2024-07-15 13:43:16.715384] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:50.269 [2024-07-15 13:43:16.715394] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:50.269 [2024-07-15 13:43:16.716394] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:50.269 [2024-07-15 13:43:16.716404] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:50.269 [2024-07-15 13:43:16.716409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:50.269 [2024-07-15 13:43:16.716415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:50.269 [2024-07-15 13:43:16.716521] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:50.269 [2024-07-15 13:43:16.716526] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:50.269 [2024-07-15 13:43:16.716530] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:50.269 [2024-07-15 13:43:16.717400] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:50.269 [2024-07-15 13:43:16.718405] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:50.269 [2024-07-15 13:43:16.719412] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:50.269 [2024-07-15 13:43:16.720414] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:50.269 [2024-07-15 13:43:16.720455] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:50.269 [2024-07-15 13:43:16.721423] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:50.269 [2024-07-15 13:43:16.721432] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:50.269 [2024-07-15 13:43:16.721437] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.721458] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:50.269 [2024-07-15 13:43:16.721466] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.721478] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:50.269 [2024-07-15 13:43:16.721484] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:50.269 [2024-07-15 13:43:16.721496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:50.269 [2024-07-15 13:43:16.728130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:50.269 [2024-07-15 13:43:16.728142] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:50.269 [2024-07-15 13:43:16.728149] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:50.269 [2024-07-15 13:43:16.728154] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:50.269 [2024-07-15 13:43:16.728158] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:50.269 [2024-07-15 13:43:16.728163] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:50.269 [2024-07-15 13:43:16.728167] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:50.269 [2024-07-15 13:43:16.728172] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.728180] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.728190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:50.269 [2024-07-15 13:43:16.736128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:50.269 [2024-07-15 13:43:16.736143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.269 [2024-07-15 13:43:16.736153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.269 [2024-07-15 13:43:16.736162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.269 [2024-07-15 13:43:16.736170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.269 [2024-07-15 13:43:16.736175] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.736183] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.736193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:50.269 [2024-07-15 13:43:16.744130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:50.269 [2024-07-15 13:43:16.744149] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:50.269 [2024-07-15 13:43:16.744154] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.744161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.744166] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.744175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:50.269 [2024-07-15 13:43:16.752130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:50.269 [2024-07-15 13:43:16.752194] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.752202] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.752210] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:50.269 [2024-07-15 13:43:16.752214] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:50.269 [2024-07-15 13:43:16.752220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:50.269 [2024-07-15 13:43:16.760130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:50.269 [2024-07-15 13:43:16.760141] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:50.269 [2024-07-15 13:43:16.760154] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.760161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.760168] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:50.269 [2024-07-15 13:43:16.760173] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:50.269 [2024-07-15 13:43:16.760179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:50.269 [2024-07-15 13:43:16.768129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:50.269 [2024-07-15 13:43:16.768145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.768153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.768160] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:50.269 [2024-07-15 13:43:16.768165] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:50.269 [2024-07-15 13:43:16.768171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:50.269 [2024-07-15 13:43:16.776127] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:50.269 [2024-07-15 13:43:16.776136] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.776143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.776150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.776156] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.776161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.776166] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.776171] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:50.269 [2024-07-15 13:43:16.776175] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:50.269 [2024-07-15 13:43:16.776180] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:50.270 [2024-07-15 13:43:16.776197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:50.270 [2024-07-15 13:43:16.784128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:50.270 [2024-07-15 13:43:16.784141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:50.531 [2024-07-15 13:43:16.792129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:50.531 [2024-07-15 13:43:16.792143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:50.531 [2024-07-15 13:43:16.800127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:50.531 [2024-07-15 13:43:16.800140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:50.531 [2024-07-15 13:43:16.808129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:50.531 [2024-07-15 13:43:16.808144] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:50.531 [2024-07-15 13:43:16.808149] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:50.531 [2024-07-15 13:43:16.808155] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:12:50.531 [2024-07-15 13:43:16.808159] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:50.531 [2024-07-15 13:43:16.808165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:50.531 [2024-07-15 13:43:16.808173] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:50.531 [2024-07-15 13:43:16.808177] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:50.531 [2024-07-15 13:43:16.808183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:50.531 [2024-07-15 13:43:16.808190] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:50.531 [2024-07-15 13:43:16.808194] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:50.531 [2024-07-15 13:43:16.808200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:50.531 [2024-07-15 13:43:16.808208] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:50.531 [2024-07-15 13:43:16.808212] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:50.531 [2024-07-15 13:43:16.808218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:50.531 [2024-07-15 13:43:16.816128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:50.531 [2024-07-15 13:43:16.816142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:50.531 [2024-07-15 13:43:16.816152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:50.531 [2024-07-15 13:43:16.816159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:50.531 ===================================================== 00:12:50.531 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:50.531 ===================================================== 00:12:50.531 Controller Capabilities/Features 00:12:50.531 ================================ 00:12:50.531 Vendor ID: 4e58 00:12:50.531 Subsystem Vendor ID: 4e58 00:12:50.531 Serial Number: SPDK2 00:12:50.531 Model Number: SPDK bdev Controller 00:12:50.531 Firmware Version: 24.09 00:12:50.531 Recommended Arb Burst: 6 00:12:50.531 IEEE OUI Identifier: 8d 6b 50 00:12:50.531 Multi-path I/O 00:12:50.531 May have multiple subsystem ports: Yes 00:12:50.531 May have multiple controllers: Yes 00:12:50.531 Associated with SR-IOV VF: No 00:12:50.531 Max Data Transfer Size: 131072 00:12:50.531 Max Number of Namespaces: 32 00:12:50.531 Max Number of I/O Queues: 127 00:12:50.531 NVMe Specification Version (VS): 1.3 00:12:50.531 NVMe Specification Version (Identify): 1.3 00:12:50.531 Maximum Queue Entries: 256 00:12:50.531 Contiguous Queues Required: Yes 00:12:50.531 Arbitration Mechanisms 
Supported 00:12:50.531 Weighted Round Robin: Not Supported 00:12:50.531 Vendor Specific: Not Supported 00:12:50.531 Reset Timeout: 15000 ms 00:12:50.531 Doorbell Stride: 4 bytes 00:12:50.531 NVM Subsystem Reset: Not Supported 00:12:50.531 Command Sets Supported 00:12:50.531 NVM Command Set: Supported 00:12:50.531 Boot Partition: Not Supported 00:12:50.531 Memory Page Size Minimum: 4096 bytes 00:12:50.531 Memory Page Size Maximum: 4096 bytes 00:12:50.531 Persistent Memory Region: Not Supported 00:12:50.531 Optional Asynchronous Events Supported 00:12:50.531 Namespace Attribute Notices: Supported 00:12:50.531 Firmware Activation Notices: Not Supported 00:12:50.531 ANA Change Notices: Not Supported 00:12:50.531 PLE Aggregate Log Change Notices: Not Supported 00:12:50.531 LBA Status Info Alert Notices: Not Supported 00:12:50.531 EGE Aggregate Log Change Notices: Not Supported 00:12:50.531 Normal NVM Subsystem Shutdown event: Not Supported 00:12:50.531 Zone Descriptor Change Notices: Not Supported 00:12:50.531 Discovery Log Change Notices: Not Supported 00:12:50.531 Controller Attributes 00:12:50.531 128-bit Host Identifier: Supported 00:12:50.531 Non-Operational Permissive Mode: Not Supported 00:12:50.531 NVM Sets: Not Supported 00:12:50.531 Read Recovery Levels: Not Supported 00:12:50.531 Endurance Groups: Not Supported 00:12:50.531 Predictable Latency Mode: Not Supported 00:12:50.531 Traffic Based Keep ALive: Not Supported 00:12:50.531 Namespace Granularity: Not Supported 00:12:50.531 SQ Associations: Not Supported 00:12:50.531 UUID List: Not Supported 00:12:50.531 Multi-Domain Subsystem: Not Supported 00:12:50.531 Fixed Capacity Management: Not Supported 00:12:50.531 Variable Capacity Management: Not Supported 00:12:50.531 Delete Endurance Group: Not Supported 00:12:50.531 Delete NVM Set: Not Supported 00:12:50.531 Extended LBA Formats Supported: Not Supported 00:12:50.531 Flexible Data Placement Supported: Not Supported 00:12:50.531 00:12:50.531 Controller Memory Buffer Support 00:12:50.531 ================================ 00:12:50.531 Supported: No 00:12:50.531 00:12:50.531 Persistent Memory Region Support 00:12:50.531 ================================ 00:12:50.531 Supported: No 00:12:50.531 00:12:50.531 Admin Command Set Attributes 00:12:50.531 ============================ 00:12:50.531 Security Send/Receive: Not Supported 00:12:50.531 Format NVM: Not Supported 00:12:50.531 Firmware Activate/Download: Not Supported 00:12:50.531 Namespace Management: Not Supported 00:12:50.531 Device Self-Test: Not Supported 00:12:50.531 Directives: Not Supported 00:12:50.531 NVMe-MI: Not Supported 00:12:50.531 Virtualization Management: Not Supported 00:12:50.531 Doorbell Buffer Config: Not Supported 00:12:50.531 Get LBA Status Capability: Not Supported 00:12:50.531 Command & Feature Lockdown Capability: Not Supported 00:12:50.531 Abort Command Limit: 4 00:12:50.531 Async Event Request Limit: 4 00:12:50.531 Number of Firmware Slots: N/A 00:12:50.531 Firmware Slot 1 Read-Only: N/A 00:12:50.531 Firmware Activation Without Reset: N/A 00:12:50.531 Multiple Update Detection Support: N/A 00:12:50.531 Firmware Update Granularity: No Information Provided 00:12:50.531 Per-Namespace SMART Log: No 00:12:50.531 Asymmetric Namespace Access Log Page: Not Supported 00:12:50.531 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:50.531 Command Effects Log Page: Supported 00:12:50.531 Get Log Page Extended Data: Supported 00:12:50.531 Telemetry Log Pages: Not Supported 00:12:50.531 Persistent Event Log Pages: Not Supported 
00:12:50.531 Supported Log Pages Log Page: May Support 00:12:50.531 Commands Supported & Effects Log Page: Not Supported 00:12:50.531 Feature Identifiers & Effects Log Page:May Support 00:12:50.531 NVMe-MI Commands & Effects Log Page: May Support 00:12:50.531 Data Area 4 for Telemetry Log: Not Supported 00:12:50.531 Error Log Page Entries Supported: 128 00:12:50.531 Keep Alive: Supported 00:12:50.531 Keep Alive Granularity: 10000 ms 00:12:50.531 00:12:50.531 NVM Command Set Attributes 00:12:50.531 ========================== 00:12:50.531 Submission Queue Entry Size 00:12:50.531 Max: 64 00:12:50.531 Min: 64 00:12:50.531 Completion Queue Entry Size 00:12:50.531 Max: 16 00:12:50.531 Min: 16 00:12:50.531 Number of Namespaces: 32 00:12:50.531 Compare Command: Supported 00:12:50.531 Write Uncorrectable Command: Not Supported 00:12:50.532 Dataset Management Command: Supported 00:12:50.532 Write Zeroes Command: Supported 00:12:50.532 Set Features Save Field: Not Supported 00:12:50.532 Reservations: Not Supported 00:12:50.532 Timestamp: Not Supported 00:12:50.532 Copy: Supported 00:12:50.532 Volatile Write Cache: Present 00:12:50.532 Atomic Write Unit (Normal): 1 00:12:50.532 Atomic Write Unit (PFail): 1 00:12:50.532 Atomic Compare & Write Unit: 1 00:12:50.532 Fused Compare & Write: Supported 00:12:50.532 Scatter-Gather List 00:12:50.532 SGL Command Set: Supported (Dword aligned) 00:12:50.532 SGL Keyed: Not Supported 00:12:50.532 SGL Bit Bucket Descriptor: Not Supported 00:12:50.532 SGL Metadata Pointer: Not Supported 00:12:50.532 Oversized SGL: Not Supported 00:12:50.532 SGL Metadata Address: Not Supported 00:12:50.532 SGL Offset: Not Supported 00:12:50.532 Transport SGL Data Block: Not Supported 00:12:50.532 Replay Protected Memory Block: Not Supported 00:12:50.532 00:12:50.532 Firmware Slot Information 00:12:50.532 ========================= 00:12:50.532 Active slot: 1 00:12:50.532 Slot 1 Firmware Revision: 24.09 00:12:50.532 00:12:50.532 00:12:50.532 Commands Supported and Effects 00:12:50.532 ============================== 00:12:50.532 Admin Commands 00:12:50.532 -------------- 00:12:50.532 Get Log Page (02h): Supported 00:12:50.532 Identify (06h): Supported 00:12:50.532 Abort (08h): Supported 00:12:50.532 Set Features (09h): Supported 00:12:50.532 Get Features (0Ah): Supported 00:12:50.532 Asynchronous Event Request (0Ch): Supported 00:12:50.532 Keep Alive (18h): Supported 00:12:50.532 I/O Commands 00:12:50.532 ------------ 00:12:50.532 Flush (00h): Supported LBA-Change 00:12:50.532 Write (01h): Supported LBA-Change 00:12:50.532 Read (02h): Supported 00:12:50.532 Compare (05h): Supported 00:12:50.532 Write Zeroes (08h): Supported LBA-Change 00:12:50.532 Dataset Management (09h): Supported LBA-Change 00:12:50.532 Copy (19h): Supported LBA-Change 00:12:50.532 00:12:50.532 Error Log 00:12:50.532 ========= 00:12:50.532 00:12:50.532 Arbitration 00:12:50.532 =========== 00:12:50.532 Arbitration Burst: 1 00:12:50.532 00:12:50.532 Power Management 00:12:50.532 ================ 00:12:50.532 Number of Power States: 1 00:12:50.532 Current Power State: Power State #0 00:12:50.532 Power State #0: 00:12:50.532 Max Power: 0.00 W 00:12:50.532 Non-Operational State: Operational 00:12:50.532 Entry Latency: Not Reported 00:12:50.532 Exit Latency: Not Reported 00:12:50.532 Relative Read Throughput: 0 00:12:50.532 Relative Read Latency: 0 00:12:50.532 Relative Write Throughput: 0 00:12:50.532 Relative Write Latency: 0 00:12:50.532 Idle Power: Not Reported 00:12:50.532 Active Power: Not Reported 00:12:50.532 
Non-Operational Permissive Mode: Not Supported 00:12:50.532 00:12:50.532 Health Information 00:12:50.532 ================== 00:12:50.532 Critical Warnings: 00:12:50.532 Available Spare Space: OK 00:12:50.532 Temperature: OK 00:12:50.532 Device Reliability: OK 00:12:50.532 Read Only: No 00:12:50.532 Volatile Memory Backup: OK 00:12:50.532 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:50.532 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:50.532 Available Spare: 0% 00:12:50.532 Available Sp[2024-07-15 13:43:16.816257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:50.532 [2024-07-15 13:43:16.824129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:50.532 [2024-07-15 13:43:16.824162] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:50.532 [2024-07-15 13:43:16.824171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.532 [2024-07-15 13:43:16.824178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.532 [2024-07-15 13:43:16.824184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.532 [2024-07-15 13:43:16.824190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.532 [2024-07-15 13:43:16.824236] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:50.532 [2024-07-15 13:43:16.824246] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:50.532 [2024-07-15 13:43:16.825239] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:50.532 [2024-07-15 13:43:16.825287] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:50.532 [2024-07-15 13:43:16.825293] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:50.532 [2024-07-15 13:43:16.826243] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:50.532 [2024-07-15 13:43:16.826255] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:50.532 [2024-07-15 13:43:16.826304] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:50.532 [2024-07-15 13:43:16.829128] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:50.532 are Threshold: 0% 00:12:50.532 Life Percentage Used: 0% 00:12:50.532 Data Units Read: 0 00:12:50.532 Data Units Written: 0 00:12:50.532 Host Read Commands: 0 00:12:50.532 Host Write Commands: 0 00:12:50.532 Controller Busy Time: 0 minutes 00:12:50.532 Power Cycles: 0 00:12:50.532 Power On Hours: 0 hours 00:12:50.532 Unsafe Shutdowns: 0 00:12:50.532 Unrecoverable Media 
Errors: 0 00:12:50.532 Lifetime Error Log Entries: 0 00:12:50.532 Warning Temperature Time: 0 minutes 00:12:50.532 Critical Temperature Time: 0 minutes 00:12:50.532 00:12:50.532 Number of Queues 00:12:50.532 ================ 00:12:50.532 Number of I/O Submission Queues: 127 00:12:50.532 Number of I/O Completion Queues: 127 00:12:50.532 00:12:50.532 Active Namespaces 00:12:50.532 ================= 00:12:50.532 Namespace ID:1 00:12:50.532 Error Recovery Timeout: Unlimited 00:12:50.532 Command Set Identifier: NVM (00h) 00:12:50.532 Deallocate: Supported 00:12:50.532 Deallocated/Unwritten Error: Not Supported 00:12:50.532 Deallocated Read Value: Unknown 00:12:50.532 Deallocate in Write Zeroes: Not Supported 00:12:50.532 Deallocated Guard Field: 0xFFFF 00:12:50.532 Flush: Supported 00:12:50.532 Reservation: Supported 00:12:50.532 Namespace Sharing Capabilities: Multiple Controllers 00:12:50.532 Size (in LBAs): 131072 (0GiB) 00:12:50.532 Capacity (in LBAs): 131072 (0GiB) 00:12:50.532 Utilization (in LBAs): 131072 (0GiB) 00:12:50.532 NGUID: C8507890D1C54A01936D61AD9F8F8C78 00:12:50.532 UUID: c8507890-d1c5-4a01-936d-61ad9f8f8c78 00:12:50.532 Thin Provisioning: Not Supported 00:12:50.532 Per-NS Atomic Units: Yes 00:12:50.532 Atomic Boundary Size (Normal): 0 00:12:50.532 Atomic Boundary Size (PFail): 0 00:12:50.532 Atomic Boundary Offset: 0 00:12:50.532 Maximum Single Source Range Length: 65535 00:12:50.532 Maximum Copy Length: 65535 00:12:50.532 Maximum Source Range Count: 1 00:12:50.532 NGUID/EUI64 Never Reused: No 00:12:50.532 Namespace Write Protected: No 00:12:50.532 Number of LBA Formats: 1 00:12:50.532 Current LBA Format: LBA Format #00 00:12:50.532 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:50.532 00:12:50.532 13:43:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:50.532 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.532 [2024-07-15 13:43:17.013194] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:55.816 Initializing NVMe Controllers 00:12:55.816 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:55.816 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:55.816 Initialization complete. Launching workers. 
00:12:55.816 ======================================================== 00:12:55.816 Latency(us) 00:12:55.816 Device Information : IOPS MiB/s Average min max 00:12:55.816 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39978.60 156.17 3204.06 826.97 6849.24 00:12:55.816 ======================================================== 00:12:55.816 Total : 39978.60 156.17 3204.06 826.97 6849.24 00:12:55.816 00:12:55.816 [2024-07-15 13:43:22.121319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:55.816 13:43:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:55.816 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.816 [2024-07-15 13:43:22.300901] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:01.123 Initializing NVMe Controllers 00:13:01.123 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:01.123 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:01.123 Initialization complete. Launching workers. 00:13:01.123 ======================================================== 00:13:01.123 Latency(us) 00:13:01.123 Device Information : IOPS MiB/s Average min max 00:13:01.123 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36148.78 141.21 3540.48 1095.69 6832.01 00:13:01.123 ======================================================== 00:13:01.123 Total : 36148.78 141.21 3540.48 1095.69 6832.01 00:13:01.123 00:13:01.123 [2024-07-15 13:43:27.320408] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:01.123 13:43:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:01.123 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.123 [2024-07-15 13:43:27.513560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:06.411 [2024-07-15 13:43:32.662211] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:06.411 Initializing NVMe Controllers 00:13:06.411 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:06.411 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:06.411 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:06.412 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:06.412 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:06.412 Initialization complete. Launching workers. 
00:13:06.412 Starting thread on core 2 00:13:06.412 Starting thread on core 3 00:13:06.412 Starting thread on core 1 00:13:06.412 13:43:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:06.412 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.412 [2024-07-15 13:43:32.917604] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:09.707 [2024-07-15 13:43:35.970551] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:09.707 Initializing NVMe Controllers 00:13:09.707 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:09.707 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:09.707 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:09.707 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:09.707 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:09.707 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:09.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:09.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:09.707 Initialization complete. Launching workers. 00:13:09.707 Starting thread on core 1 with urgent priority queue 00:13:09.707 Starting thread on core 2 with urgent priority queue 00:13:09.707 Starting thread on core 3 with urgent priority queue 00:13:09.707 Starting thread on core 0 with urgent priority queue 00:13:09.707 SPDK bdev Controller (SPDK2 ) core 0: 14661.00 IO/s 6.82 secs/100000 ios 00:13:09.707 SPDK bdev Controller (SPDK2 ) core 1: 8240.00 IO/s 12.14 secs/100000 ios 00:13:09.708 SPDK bdev Controller (SPDK2 ) core 2: 15140.33 IO/s 6.60 secs/100000 ios 00:13:09.708 SPDK bdev Controller (SPDK2 ) core 3: 8116.33 IO/s 12.32 secs/100000 ios 00:13:09.708 ======================================================== 00:13:09.708 00:13:09.708 13:43:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:09.708 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.708 [2024-07-15 13:43:36.232617] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:09.968 Initializing NVMe Controllers 00:13:09.968 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:09.968 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:09.968 Namespace ID: 1 size: 0GB 00:13:09.968 Initialization complete. 00:13:09.968 INFO: using host memory buffer for IO 00:13:09.968 Hello world! 
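Each of the example tools exercised above (spdk_nvme_perf, reconnect, arbitration, hello_world, overhead) addresses the vfio-user controller through an SPDK transport ID rather than a PCI address. A minimal sketch of that invocation pattern, reusing the socket directory, subsystem NQN, and flags from this run (the relative build path is an assumption, not taken from this log), looks like:

# Transport ID pieces taken from this run; the relative build path is assumed.
VFU_DIR=/var/run/vfio-user/domain/vfio-user2/2
SUBNQN=nqn.2019-07.io.spdk:cnode2

# 5-second 4 KiB read run, queue depth 128, on core 1 (mask 0x2), as in the perf pass above.
./build/bin/spdk_nvme_perf \
    -r "trtype:VFIOUSER traddr:${VFU_DIR} subnqn:${SUBNQN}" \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The same -r transport ID string is what the reconnect, arbitration, hello_world, and overhead runs below pass to reach the controller.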
00:13:09.968 [2024-07-15 13:43:36.241680] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:09.968 13:43:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:09.968 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.229 [2024-07-15 13:43:36.503373] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:11.170 Initializing NVMe Controllers 00:13:11.170 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.170 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.170 Initialization complete. Launching workers. 00:13:11.170 submit (in ns) avg, min, max = 8417.9, 3894.2, 4030996.7 00:13:11.170 complete (in ns) avg, min, max = 16868.3, 2401.7, 4000571.7 00:13:11.170 00:13:11.170 Submit histogram 00:13:11.171 ================ 00:13:11.171 Range in us Cumulative Count 00:13:11.171 3.893 - 3.920: 2.9580% ( 569) 00:13:11.171 3.920 - 3.947: 10.0541% ( 1365) 00:13:11.171 3.947 - 3.973: 17.8104% ( 1492) 00:13:11.171 3.973 - 4.000: 29.0341% ( 2159) 00:13:11.171 4.000 - 4.027: 39.9303% ( 2096) 00:13:11.171 4.027 - 4.053: 51.1749% ( 2163) 00:13:11.171 4.053 - 4.080: 66.6147% ( 2970) 00:13:11.171 4.080 - 4.107: 80.8588% ( 2740) 00:13:11.171 4.107 - 4.133: 91.2924% ( 2007) 00:13:11.171 4.133 - 4.160: 96.5845% ( 1018) 00:13:11.171 4.160 - 4.187: 98.5288% ( 374) 00:13:11.171 4.187 - 4.213: 99.1890% ( 127) 00:13:11.171 4.213 - 4.240: 99.3710% ( 35) 00:13:11.171 4.240 - 4.267: 99.4697% ( 19) 00:13:11.171 4.267 - 4.293: 99.4905% ( 4) 00:13:11.171 4.320 - 4.347: 99.4957% ( 1) 00:13:11.171 4.400 - 4.427: 99.5009% ( 1) 00:13:11.171 4.480 - 4.507: 99.5061% ( 1) 00:13:11.171 4.773 - 4.800: 99.5113% ( 1) 00:13:11.171 4.853 - 4.880: 99.5165% ( 1) 00:13:11.171 4.880 - 4.907: 99.5217% ( 1) 00:13:11.171 4.987 - 5.013: 99.5269% ( 1) 00:13:11.171 5.227 - 5.253: 99.5321% ( 1) 00:13:11.171 5.387 - 5.413: 99.5373% ( 1) 00:13:11.171 5.547 - 5.573: 99.5425% ( 1) 00:13:11.171 5.600 - 5.627: 99.5477% ( 1) 00:13:11.171 5.867 - 5.893: 99.5529% ( 1) 00:13:11.171 6.027 - 6.053: 99.5633% ( 2) 00:13:11.171 6.053 - 6.080: 99.5685% ( 1) 00:13:11.171 6.080 - 6.107: 99.5789% ( 2) 00:13:11.171 6.160 - 6.187: 99.5841% ( 1) 00:13:11.171 6.293 - 6.320: 99.5945% ( 2) 00:13:11.171 6.373 - 6.400: 99.5997% ( 1) 00:13:11.171 6.400 - 6.427: 99.6049% ( 1) 00:13:11.171 6.427 - 6.453: 99.6153% ( 2) 00:13:11.171 6.453 - 6.480: 99.6205% ( 1) 00:13:11.171 6.480 - 6.507: 99.6257% ( 1) 00:13:11.171 6.507 - 6.533: 99.6309% ( 1) 00:13:11.171 6.533 - 6.560: 99.6361% ( 1) 00:13:11.171 6.560 - 6.587: 99.6413% ( 1) 00:13:11.171 6.613 - 6.640: 99.6465% ( 1) 00:13:11.171 6.693 - 6.720: 99.6621% ( 3) 00:13:11.171 6.747 - 6.773: 99.6725% ( 2) 00:13:11.171 6.773 - 6.800: 99.6777% ( 1) 00:13:11.171 6.827 - 6.880: 99.6985% ( 4) 00:13:11.171 6.933 - 6.987: 99.7089% ( 2) 00:13:11.171 6.987 - 7.040: 99.7141% ( 1) 00:13:11.171 7.040 - 7.093: 99.7245% ( 2) 00:13:11.171 7.093 - 7.147: 99.7401% ( 3) 00:13:11.171 7.147 - 7.200: 99.7505% ( 2) 00:13:11.171 7.200 - 7.253: 99.7557% ( 1) 00:13:11.171 7.253 - 7.307: 99.7609% ( 1) 00:13:11.171 7.307 - 7.360: 99.7661% ( 1) 00:13:11.171 7.360 - 7.413: 99.7765% ( 2) 00:13:11.171 7.413 - 7.467: 99.7973% ( 4) 00:13:11.171 7.467 - 7.520: 99.8025% ( 1) 00:13:11.171 7.520 - 7.573: 99.8232% ( 4) 
00:13:11.171 7.573 - 7.627: 99.8336% ( 2) 00:13:11.171 7.627 - 7.680: 99.8544% ( 4) 00:13:11.171 7.733 - 7.787: 99.8596% ( 1) 00:13:11.171 7.840 - 7.893: 99.8700% ( 2) 00:13:11.171 7.893 - 7.947: 99.8752% ( 1) 00:13:11.171 8.320 - 8.373: 99.8804% ( 1) 00:13:11.171 8.693 - 8.747: 99.8856% ( 1) 00:13:11.171 13.120 - 13.173: 99.8908% ( 1) 00:13:11.171 3986.773 - 4014.080: 99.9896% ( 19) 00:13:11.171 4014.080 - 4041.387: 100.0000% ( 2) 00:13:11.171 00:13:11.171 Complete histogram 00:13:11.171 ================== 00:13:11.171 Range in us Cumulative Count 00:13:11.171 2.400 - 2.413: 0.0104% ( 2) 00:13:11.171 2.413 - 2.427: 0.9877% ( 188) 00:13:11.171 2.427 - 2.440: 1.1281% ( 27) 00:13:11.171 2.440 - 2.453: 1.2996% ( 33) 00:13:11.171 2.453 - 2.467: 1.3412% ( 8) 00:13:11.171 2.467 - 2.480: 1.3568% ( 3) 00:13:11.171 2.480 - 2.493: 43.4186% ( 8091) 00:13:11.171 2.493 - 2.507: 57.8811% ( 2782) 00:13:11.171 2.507 - 2.520: 69.9990% ( 2331) 00:13:11.171 2.520 - 2.533: 78.6286% ( 1660) 00:13:11.171 2.533 - [2024-07-15 13:43:37.598781] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:11.171 2.547: 81.2643% ( 507) 00:13:11.171 2.547 - 2.560: 84.7214% ( 665) 00:13:11.171 2.560 - 2.573: 89.9147% ( 999) 00:13:11.171 2.573 - 2.587: 94.5935% ( 900) 00:13:11.171 2.587 - 2.600: 97.0836% ( 479) 00:13:11.171 2.600 - 2.613: 98.5704% ( 286) 00:13:11.171 2.613 - 2.627: 99.2098% ( 123) 00:13:11.171 2.627 - 2.640: 99.3918% ( 35) 00:13:11.171 2.640 - 2.653: 99.4178% ( 5) 00:13:11.171 2.653 - 2.667: 99.4230% ( 1) 00:13:11.171 2.667 - 2.680: 99.4282% ( 1) 00:13:11.171 3.240 - 3.253: 99.4334% ( 1) 00:13:11.171 4.640 - 4.667: 99.4386% ( 1) 00:13:11.171 4.667 - 4.693: 99.4438% ( 1) 00:13:11.171 4.907 - 4.933: 99.4489% ( 1) 00:13:11.171 4.960 - 4.987: 99.4593% ( 2) 00:13:11.171 5.067 - 5.093: 99.4645% ( 1) 00:13:11.171 5.093 - 5.120: 99.4697% ( 1) 00:13:11.171 5.120 - 5.147: 99.4749% ( 1) 00:13:11.171 5.147 - 5.173: 99.4801% ( 1) 00:13:11.171 5.227 - 5.253: 99.4853% ( 1) 00:13:11.171 5.253 - 5.280: 99.4957% ( 2) 00:13:11.171 5.333 - 5.360: 99.5061% ( 2) 00:13:11.171 5.360 - 5.387: 99.5113% ( 1) 00:13:11.171 5.413 - 5.440: 99.5165% ( 1) 00:13:11.171 5.467 - 5.493: 99.5217% ( 1) 00:13:11.171 5.520 - 5.547: 99.5269% ( 1) 00:13:11.171 5.547 - 5.573: 99.5425% ( 3) 00:13:11.171 5.600 - 5.627: 99.5633% ( 4) 00:13:11.171 5.653 - 5.680: 99.5841% ( 4) 00:13:11.171 5.733 - 5.760: 99.5893% ( 1) 00:13:11.171 5.813 - 5.840: 99.5945% ( 1) 00:13:11.171 5.920 - 5.947: 99.5997% ( 1) 00:13:11.171 6.027 - 6.053: 99.6049% ( 1) 00:13:11.171 6.560 - 6.587: 99.6101% ( 1) 00:13:11.171 6.987 - 7.040: 99.6153% ( 1) 00:13:11.171 11.627 - 11.680: 99.6205% ( 1) 00:13:11.171 13.440 - 13.493: 99.6257% ( 1) 00:13:11.171 44.373 - 44.587: 99.6309% ( 1) 00:13:11.171 157.867 - 158.720: 99.6361% ( 1) 00:13:11.171 187.733 - 188.587: 99.6413% ( 1) 00:13:11.171 3986.773 - 4014.080: 100.0000% ( 69) 00:13:11.171 00:13:11.171 13:43:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:11.171 13:43:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:11.171 13:43:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:11.171 13:43:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:11.171 13:43:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:11.432 [ 00:13:11.432 { 00:13:11.432 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:11.432 "subtype": "Discovery", 00:13:11.432 "listen_addresses": [], 00:13:11.432 "allow_any_host": true, 00:13:11.432 "hosts": [] 00:13:11.432 }, 00:13:11.432 { 00:13:11.432 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:11.432 "subtype": "NVMe", 00:13:11.432 "listen_addresses": [ 00:13:11.432 { 00:13:11.432 "trtype": "VFIOUSER", 00:13:11.432 "adrfam": "IPv4", 00:13:11.432 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:11.432 "trsvcid": "0" 00:13:11.432 } 00:13:11.432 ], 00:13:11.432 "allow_any_host": true, 00:13:11.432 "hosts": [], 00:13:11.432 "serial_number": "SPDK1", 00:13:11.432 "model_number": "SPDK bdev Controller", 00:13:11.432 "max_namespaces": 32, 00:13:11.432 "min_cntlid": 1, 00:13:11.432 "max_cntlid": 65519, 00:13:11.432 "namespaces": [ 00:13:11.432 { 00:13:11.432 "nsid": 1, 00:13:11.432 "bdev_name": "Malloc1", 00:13:11.432 "name": "Malloc1", 00:13:11.432 "nguid": "2749E5AAA1CE4D2F9B34702B195282EB", 00:13:11.432 "uuid": "2749e5aa-a1ce-4d2f-9b34-702b195282eb" 00:13:11.432 }, 00:13:11.432 { 00:13:11.432 "nsid": 2, 00:13:11.432 "bdev_name": "Malloc3", 00:13:11.432 "name": "Malloc3", 00:13:11.432 "nguid": "C96D39A32A2749D5B4097E175884A72B", 00:13:11.432 "uuid": "c96d39a3-2a27-49d5-b409-7e175884a72b" 00:13:11.432 } 00:13:11.432 ] 00:13:11.432 }, 00:13:11.432 { 00:13:11.432 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:11.432 "subtype": "NVMe", 00:13:11.432 "listen_addresses": [ 00:13:11.432 { 00:13:11.432 "trtype": "VFIOUSER", 00:13:11.432 "adrfam": "IPv4", 00:13:11.432 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:11.432 "trsvcid": "0" 00:13:11.432 } 00:13:11.432 ], 00:13:11.432 "allow_any_host": true, 00:13:11.432 "hosts": [], 00:13:11.432 "serial_number": "SPDK2", 00:13:11.432 "model_number": "SPDK bdev Controller", 00:13:11.432 "max_namespaces": 32, 00:13:11.432 "min_cntlid": 1, 00:13:11.432 "max_cntlid": 65519, 00:13:11.432 "namespaces": [ 00:13:11.432 { 00:13:11.432 "nsid": 1, 00:13:11.432 "bdev_name": "Malloc2", 00:13:11.432 "name": "Malloc2", 00:13:11.432 "nguid": "C8507890D1C54A01936D61AD9F8F8C78", 00:13:11.432 "uuid": "c8507890-d1c5-4a01-936d-61ad9f8f8c78" 00:13:11.432 } 00:13:11.432 ] 00:13:11.432 } 00:13:11.432 ] 00:13:11.432 13:43:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:11.432 13:43:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:11.432 13:43:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1002198 00:13:11.432 13:43:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:11.432 13:43:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:11.432 13:43:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:11.432 13:43:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:11.432 13:43:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:11.432 13:43:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:11.432 13:43:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:11.432 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.693 [2024-07-15 13:43:37.973995] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:11.693 Malloc4 00:13:11.693 13:43:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:11.693 [2024-07-15 13:43:38.137132] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:11.693 13:43:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:11.693 Asynchronous Event Request test 00:13:11.693 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.693 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.693 Registering asynchronous event callbacks... 00:13:11.693 Starting namespace attribute notice tests for all controllers... 00:13:11.693 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:11.693 aer_cb - Changed Namespace 00:13:11.693 Cleaning up... 00:13:11.954 [ 00:13:11.954 { 00:13:11.954 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:11.954 "subtype": "Discovery", 00:13:11.954 "listen_addresses": [], 00:13:11.954 "allow_any_host": true, 00:13:11.954 "hosts": [] 00:13:11.954 }, 00:13:11.954 { 00:13:11.954 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:11.954 "subtype": "NVMe", 00:13:11.954 "listen_addresses": [ 00:13:11.954 { 00:13:11.954 "trtype": "VFIOUSER", 00:13:11.954 "adrfam": "IPv4", 00:13:11.954 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:11.954 "trsvcid": "0" 00:13:11.954 } 00:13:11.954 ], 00:13:11.954 "allow_any_host": true, 00:13:11.954 "hosts": [], 00:13:11.954 "serial_number": "SPDK1", 00:13:11.954 "model_number": "SPDK bdev Controller", 00:13:11.954 "max_namespaces": 32, 00:13:11.954 "min_cntlid": 1, 00:13:11.954 "max_cntlid": 65519, 00:13:11.954 "namespaces": [ 00:13:11.954 { 00:13:11.954 "nsid": 1, 00:13:11.954 "bdev_name": "Malloc1", 00:13:11.954 "name": "Malloc1", 00:13:11.954 "nguid": "2749E5AAA1CE4D2F9B34702B195282EB", 00:13:11.954 "uuid": "2749e5aa-a1ce-4d2f-9b34-702b195282eb" 00:13:11.954 }, 00:13:11.954 { 00:13:11.954 "nsid": 2, 00:13:11.954 "bdev_name": "Malloc3", 00:13:11.954 "name": "Malloc3", 00:13:11.954 "nguid": "C96D39A32A2749D5B4097E175884A72B", 00:13:11.954 "uuid": "c96d39a3-2a27-49d5-b409-7e175884a72b" 00:13:11.954 } 00:13:11.954 ] 00:13:11.954 }, 00:13:11.954 { 00:13:11.954 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:11.954 "subtype": "NVMe", 00:13:11.954 "listen_addresses": [ 00:13:11.954 { 00:13:11.954 "trtype": "VFIOUSER", 00:13:11.954 "adrfam": "IPv4", 00:13:11.954 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:11.954 "trsvcid": "0" 00:13:11.954 } 00:13:11.954 ], 00:13:11.954 "allow_any_host": true, 00:13:11.954 "hosts": [], 00:13:11.954 "serial_number": "SPDK2", 00:13:11.954 "model_number": "SPDK bdev Controller", 00:13:11.954 
"max_namespaces": 32, 00:13:11.954 "min_cntlid": 1, 00:13:11.954 "max_cntlid": 65519, 00:13:11.954 "namespaces": [ 00:13:11.954 { 00:13:11.954 "nsid": 1, 00:13:11.954 "bdev_name": "Malloc2", 00:13:11.954 "name": "Malloc2", 00:13:11.954 "nguid": "C8507890D1C54A01936D61AD9F8F8C78", 00:13:11.954 "uuid": "c8507890-d1c5-4a01-936d-61ad9f8f8c78" 00:13:11.954 }, 00:13:11.954 { 00:13:11.954 "nsid": 2, 00:13:11.954 "bdev_name": "Malloc4", 00:13:11.954 "name": "Malloc4", 00:13:11.954 "nguid": "3866712E5DE24BD9B1B00825DA5F6251", 00:13:11.954 "uuid": "3866712e-5de2-4bd9-b1b0-0825da5f6251" 00:13:11.954 } 00:13:11.954 ] 00:13:11.954 } 00:13:11.954 ] 00:13:11.954 13:43:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1002198 00:13:11.954 13:43:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:11.954 13:43:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 993000 00:13:11.954 13:43:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 993000 ']' 00:13:11.954 13:43:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 993000 00:13:11.954 13:43:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:11.954 13:43:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:11.954 13:43:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 993000 00:13:11.954 13:43:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:11.954 13:43:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:11.954 13:43:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 993000' 00:13:11.954 killing process with pid 993000 00:13:11.954 13:43:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 993000 00:13:11.954 13:43:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 993000 00:13:12.215 13:43:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:12.215 13:43:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:12.215 13:43:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:12.215 13:43:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:12.215 13:43:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:12.215 13:43:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1002305 00:13:12.215 13:43:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1002305' 00:13:12.215 Process pid: 1002305 00:13:12.215 13:43:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:12.215 13:43:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:12.215 13:43:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1002305 00:13:12.215 13:43:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1002305 ']' 00:13:12.215 13:43:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.215 13:43:38 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.215 13:43:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.215 13:43:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.215 13:43:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:12.215 [2024-07-15 13:43:38.611953] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:12.215 [2024-07-15 13:43:38.612896] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:12.215 [2024-07-15 13:43:38.612938] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.215 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.215 [2024-07-15 13:43:38.674014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.476 [2024-07-15 13:43:38.741151] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.476 [2024-07-15 13:43:38.741185] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.476 [2024-07-15 13:43:38.741193] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.476 [2024-07-15 13:43:38.741199] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.476 [2024-07-15 13:43:38.741205] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.476 [2024-07-15 13:43:38.741270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.476 [2024-07-15 13:43:38.741385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.476 [2024-07-15 13:43:38.741542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.476 [2024-07-15 13:43:38.741543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.476 [2024-07-15 13:43:38.810311] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:12.476 [2024-07-15 13:43:38.810443] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:12.476 [2024-07-15 13:43:38.811520] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:12.476 [2024-07-15 13:43:38.811919] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:12.476 [2024-07-15 13:43:38.812007] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
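With the target restarted in interrupt mode, the trace that follows re-creates the two vfio-user devices through rpc.py. Condensed into a sketch (commands as they appear in this run, with a relative spdk checkout path assumed instead of the full Jenkins workspace path), the per-device sequence is:

# Create the VFIOUSER transport (-M/-I as used in this interrupt-mode run),
# then back the controller with a malloc bdev and expose it on a socket directory.
scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
    -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
# The second device (Malloc2 / nqn.2019-07.io.spdk:cnode2 / vfio-user2) repeats the same pattern.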
00:13:13.047 13:43:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.047 13:43:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:13.047 13:43:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:13.988 13:43:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:14.248 13:43:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:14.248 13:43:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:14.248 13:43:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:14.248 13:43:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:14.248 13:43:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:14.248 Malloc1 00:13:14.248 13:43:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:14.508 13:43:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:14.781 13:43:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:14.781 13:43:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:14.781 13:43:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:14.781 13:43:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:15.093 Malloc2 00:13:15.093 13:43:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:15.093 13:43:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:15.352 13:43:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:15.612 13:43:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:15.612 13:43:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1002305 00:13:15.612 13:43:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1002305 ']' 00:13:15.612 13:43:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1002305 00:13:15.612 13:43:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:15.612 13:43:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:15.612 13:43:41 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1002305 00:13:15.612 13:43:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:15.612 13:43:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:15.612 13:43:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1002305' 00:13:15.612 killing process with pid 1002305 00:13:15.612 13:43:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1002305 00:13:15.612 13:43:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1002305 00:13:15.612 13:43:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:15.612 13:43:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:15.612 00:13:15.612 real 0m51.337s 00:13:15.612 user 3m23.516s 00:13:15.612 sys 0m2.933s 00:13:15.612 13:43:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:15.612 13:43:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:15.612 ************************************ 00:13:15.612 END TEST nvmf_vfio_user 00:13:15.612 ************************************ 00:13:15.874 13:43:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:15.874 13:43:42 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:15.874 13:43:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:15.874 13:43:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.874 13:43:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:15.874 ************************************ 00:13:15.874 START TEST nvmf_vfio_user_nvme_compliance 00:13:15.874 ************************************ 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:15.874 * Looking for test storage... 
00:13:15.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.874 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1003225 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1003225' 00:13:15.875 Process pid: 1003225 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1003225 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1003225 ']' 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.875 13:43:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:15.875 [2024-07-15 13:43:42.382914] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:15.875 [2024-07-15 13:43:42.382991] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.136 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.136 [2024-07-15 13:43:42.449435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:16.136 [2024-07-15 13:43:42.522086] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.136 [2024-07-15 13:43:42.522128] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.136 [2024-07-15 13:43:42.522136] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.136 [2024-07-15 13:43:42.522142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.136 [2024-07-15 13:43:42.522148] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:16.136 [2024-07-15 13:43:42.522246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.136 [2024-07-15 13:43:42.522460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.136 [2024-07-15 13:43:42.522463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.706 13:43:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.706 13:43:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:16.706 13:43:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:17.644 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:17.905 malloc0 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:17.905 13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.905 
13:43:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:17.905 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.905 00:13:17.905 00:13:17.905 CUnit - A unit testing framework for C - Version 2.1-3 00:13:17.905 http://cunit.sourceforge.net/ 00:13:17.905 00:13:17.905 00:13:17.905 Suite: nvme_compliance 00:13:17.905 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 13:43:44.413576] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.905 [2024-07-15 13:43:44.414929] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:17.905 [2024-07-15 13:43:44.414941] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:17.905 [2024-07-15 13:43:44.414945] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:17.905 [2024-07-15 13:43:44.416593] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.166 passed 00:13:18.166 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 13:43:44.509187] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.166 [2024-07-15 13:43:44.512208] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.166 passed 00:13:18.166 Test: admin_identify_ns ...[2024-07-15 13:43:44.607364] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.166 [2024-07-15 13:43:44.671134] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:18.166 [2024-07-15 13:43:44.679131] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:18.426 [2024-07-15 13:43:44.700241] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.426 passed 00:13:18.426 Test: admin_get_features_mandatory_features ...[2024-07-15 13:43:44.790832] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.426 [2024-07-15 13:43:44.793847] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.426 passed 00:13:18.426 Test: admin_get_features_optional_features ...[2024-07-15 13:43:44.887395] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.426 [2024-07-15 13:43:44.890414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.426 passed 00:13:18.685 Test: admin_set_features_number_of_queues ...[2024-07-15 13:43:44.984551] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.685 [2024-07-15 13:43:45.088309] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.685 passed 00:13:18.685 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 13:43:45.183270] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.685 [2024-07-15 13:43:45.186290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.944 passed 00:13:18.944 Test: admin_get_log_page_with_lpo ...[2024-07-15 13:43:45.279407] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.944 [2024-07-15 13:43:45.347134] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:18.944 [2024-07-15 13:43:45.360193] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.944 passed 00:13:18.944 Test: fabric_property_get ...[2024-07-15 13:43:45.453256] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.944 [2024-07-15 13:43:45.454494] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:18.944 [2024-07-15 13:43:45.457284] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:19.204 passed 00:13:19.204 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 13:43:45.549787] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.204 [2024-07-15 13:43:45.551052] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:19.204 [2024-07-15 13:43:45.552810] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:19.204 passed 00:13:19.204 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 13:43:45.645939] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.467 [2024-07-15 13:43:45.730133] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:19.467 [2024-07-15 13:43:45.746134] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:19.467 [2024-07-15 13:43:45.751217] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:19.467 passed 00:13:19.467 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 13:43:45.843824] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.467 [2024-07-15 13:43:45.845071] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:19.467 [2024-07-15 13:43:45.846843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:19.467 passed 00:13:19.467 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 13:43:45.939379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.728 [2024-07-15 13:43:46.015129] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:19.728 [2024-07-15 13:43:46.039131] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:19.728 [2024-07-15 13:43:46.044217] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:19.728 passed 00:13:19.728 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 13:43:46.138659] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.728 [2024-07-15 13:43:46.139919] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:19.728 [2024-07-15 13:43:46.139939] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:19.728 [2024-07-15 13:43:46.141685] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:19.728 passed 00:13:19.728 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 13:43:46.234367] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.989 [2024-07-15 13:43:46.326129] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:19.989 [2024-07-15 13:43:46.334126] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:19.989 [2024-07-15 13:43:46.342131] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:19.989 [2024-07-15 13:43:46.350137] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:19.989 [2024-07-15 13:43:46.379218] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:19.989 passed 00:13:19.989 Test: admin_create_io_sq_verify_pc ...[2024-07-15 13:43:46.473203] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.989 [2024-07-15 13:43:46.493139] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:19.989 [2024-07-15 13:43:46.510372] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.248 passed 00:13:20.248 Test: admin_create_io_qp_max_qps ...[2024-07-15 13:43:46.601909] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.188 [2024-07-15 13:43:47.695132] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:21.759 [2024-07-15 13:43:48.082565] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.759 passed 00:13:21.759 Test: admin_create_io_sq_shared_cq ...[2024-07-15 13:43:48.175768] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:22.019 [2024-07-15 13:43:48.308133] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:22.019 [2024-07-15 13:43:48.345185] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:22.019 passed 00:13:22.019 00:13:22.019 Run Summary: Type Total Ran Passed Failed Inactive 00:13:22.019 suites 1 1 n/a 0 0 00:13:22.019 tests 18 18 18 0 0 00:13:22.019 asserts 360 360 360 0 n/a 00:13:22.019 00:13:22.019 Elapsed time = 1.647 seconds 00:13:22.019 13:43:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1003225 00:13:22.019 13:43:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1003225 ']' 00:13:22.019 13:43:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1003225 00:13:22.019 13:43:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:22.019 13:43:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:22.019 13:43:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1003225 00:13:22.019 13:43:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:22.019 13:43:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:22.019 13:43:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1003225' 00:13:22.019 killing process with pid 1003225 00:13:22.019 13:43:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1003225 00:13:22.019 13:43:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1003225 00:13:22.279 13:43:48 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:22.279 00:13:22.279 real 0m6.406s 00:13:22.279 user 0m18.338s 00:13:22.279 sys 0m0.466s 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:22.279 ************************************ 00:13:22.279 END TEST nvmf_vfio_user_nvme_compliance 00:13:22.279 ************************************ 00:13:22.279 13:43:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:22.279 13:43:48 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:22.279 13:43:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:22.279 13:43:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.279 13:43:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:22.279 ************************************ 00:13:22.279 START TEST nvmf_vfio_user_fuzz 00:13:22.279 ************************************ 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:22.279 * Looking for test storage... 00:13:22.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.279 13:43:48 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:22.279 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:22.280 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:22.280 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:22.280 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:22.280 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:22.280 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:22.539 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:22.539 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1004448 00:13:22.539 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1004448' 00:13:22.539 Process pid: 1004448 00:13:22.539 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:22.539 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:22.539 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1004448 00:13:22.539 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1004448 ']' 00:13:22.539 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.539 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.539 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
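Condensed, the vfio-user fuzz target bring-up traced below amounts to a short RPC sequence. This is a minimal sketch, assuming rpc_cmd in the trace is the usual thin wrapper around SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock; the method names, sizes, NQN and socket paths are taken from the trace itself, the SPDK checkout path is shortened for readability.

  # Start an in-process NVMe-oF target and expose a 64 MiB malloc bdev over a
  # vfio-user socket (arguments as seen in the xtrace that follows).
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0
  # Fuzz the resulting controller's admin and I/O queues for 30 seconds with a
  # fixed seed, exactly as the test script does:
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a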
00:13:22.539 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.539 13:43:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:23.109 13:43:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.109 13:43:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:23.109 13:43:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:24.487 malloc0 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:24.487 13:43:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:56.590 Fuzzing completed. 
Shutting down the fuzz application 00:13:56.590 00:13:56.590 Dumping successful admin opcodes: 00:13:56.590 8, 9, 10, 24, 00:13:56.590 Dumping successful io opcodes: 00:13:56.590 0, 00:13:56.590 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1133488, total successful commands: 4462, random_seed: 1471737408 00:13:56.590 NS: 0x200003a1ef00 admin qp, Total commands completed: 142653, total successful commands: 1160, random_seed: 2573229632 00:13:56.590 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:56.590 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.590 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:56.590 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.590 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1004448 00:13:56.590 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1004448 ']' 00:13:56.590 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1004448 00:13:56.590 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:56.590 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:56.590 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1004448 00:13:56.590 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:56.590 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:56.591 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1004448' 00:13:56.591 killing process with pid 1004448 00:13:56.591 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1004448 00:13:56.591 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1004448 00:13:56.591 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:56.591 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:56.591 00:13:56.591 real 0m33.692s 00:13:56.591 user 0m38.151s 00:13:56.591 sys 0m25.635s 00:13:56.591 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:56.591 13:44:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:56.591 ************************************ 00:13:56.591 END TEST nvmf_vfio_user_fuzz 00:13:56.591 ************************************ 00:13:56.591 13:44:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:56.591 13:44:22 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:56.591 13:44:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:56.591 13:44:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:56.591 13:44:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:56.591 ************************************ 
00:13:56.591 START TEST nvmf_host_management 00:13:56.591 ************************************ 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:56.591 * Looking for test storage... 00:13:56.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.591 
13:44:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:56.591 13:44:22 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:56.591 13:44:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.224 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:03.225 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:03.225 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:03.225 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:03.225 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:03.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:14:03.225 00:14:03.225 --- 10.0.0.2 ping statistics --- 00:14:03.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.225 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:03.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:03.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.470 ms 00:14:03.225 00:14:03.225 --- 10.0.0.1 ping statistics --- 00:14:03.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.225 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:03.225 13:44:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.486 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1014698 00:14:03.486 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1014698 00:14:03.486 13:44:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:03.486 13:44:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1014698 ']' 00:14:03.486 13:44:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.486 13:44:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.486 13:44:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:03.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.486 13:44:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.486 13:44:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.486 [2024-07-15 13:44:29.814743] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:03.486 [2024-07-15 13:44:29.814810] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.486 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.486 [2024-07-15 13:44:29.902709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.486 [2024-07-15 13:44:29.999353] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.486 [2024-07-15 13:44:29.999417] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.486 [2024-07-15 13:44:29.999429] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.486 [2024-07-15 13:44:29.999437] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.486 [2024-07-15 13:44:29.999443] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.486 [2024-07-15 13:44:29.999580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.486 [2024-07-15 13:44:29.999752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.486 [2024-07-15 13:44:29.999886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.486 [2024-07-15 13:44:29.999888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:04.425 [2024-07-15 13:44:30.626664] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:04.425 13:44:30 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:04.425 Malloc0 00:14:04.425 [2024-07-15 13:44:30.689776] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1015054 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1015054 /var/tmp/bdevperf.sock 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1015054 ']' 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:04.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
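Pulling the preceding trace together: the host-management test isolates one E810 port (cvl_0_0) in a network namespace with address 10.0.0.2, keeps the peer port (cvl_0_1, 10.0.0.1) in the root namespace, starts nvmf_tgt inside the namespace and exposes a 64 MiB malloc bdev over NVMe/TCP on port 4420. The sketch below reconstructs an equivalent manual bring-up; the ip/iptables lines and the transport RPC are verbatim from the trace, while the bdev/subsystem/listener RPCs are an assumption pieced together from MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, the "Listening on 10.0.0.2 port 4420" notice, the later nvmf_subsystem_remove_host call and the NQNs in the bdevperf config further down (the actual rpcs.txt contents are never printed).

  # Network side: target port in its own namespace, initiator port in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Target side (assumed RPC sequence; only the transport call is visible in the trace).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420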
00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:04.425 { 00:14:04.425 "params": { 00:14:04.425 "name": "Nvme$subsystem", 00:14:04.425 "trtype": "$TEST_TRANSPORT", 00:14:04.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:04.425 "adrfam": "ipv4", 00:14:04.425 "trsvcid": "$NVMF_PORT", 00:14:04.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:04.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:04.425 "hdgst": ${hdgst:-false}, 00:14:04.425 "ddgst": ${ddgst:-false} 00:14:04.425 }, 00:14:04.425 "method": "bdev_nvme_attach_controller" 00:14:04.425 } 00:14:04.425 EOF 00:14:04.425 )") 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:04.425 13:44:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:04.425 "params": { 00:14:04.425 "name": "Nvme0", 00:14:04.425 "trtype": "tcp", 00:14:04.425 "traddr": "10.0.0.2", 00:14:04.425 "adrfam": "ipv4", 00:14:04.425 "trsvcid": "4420", 00:14:04.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:04.426 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:04.426 "hdgst": false, 00:14:04.426 "ddgst": false 00:14:04.426 }, 00:14:04.426 "method": "bdev_nvme_attach_controller" 00:14:04.426 }' 00:14:04.426 [2024-07-15 13:44:30.788798] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:04.426 [2024-07-15 13:44:30.788850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1015054 ] 00:14:04.426 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.426 [2024-07-15 13:44:30.847650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.426 [2024-07-15 13:44:30.912552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.685 Running I/O for 10 seconds... 
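The heredoc printed above is what gen_nvmf_target_json emits for controller 0: a single bdev_nvme_attach_controller entry pointing at the TCP listener, handed to bdevperf via /dev/fd/63. The bdevperf flags on that command line are -q 64 (queue depth), -o 65536 (64 KiB I/O size), -w verify (write, read back and compare), -t 10 (seconds) and -r (bdevperf's own RPC socket, which the script queries later with bdev_get_iostat). Assembled, the config bdevperf parses looks roughly like the sketch below; the outer "subsystems"/"bdev" wrapper follows the standard SPDK JSON config layout and is an assumption here, since only the inner entry appears in the trace.

  cat > /tmp/bdevperf_nvme.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # Equivalent standalone invocation (config from a file instead of /dev/fd/63):
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json \
      -q 64 -o 65536 -w verify -t 10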
00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.257 13:44:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:05.257 [2024-07-15 13:44:31.640788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1239e40 is same with the state(5) to be set 00:14:05.257 [2024-07-15 13:44:31.640862] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1239e40 is same with the state(5) to be set 00:14:05.257 [2024-07-15 13:44:31.641709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.641749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.641766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.641776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.641786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.641796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.641807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.641815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.641825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.641834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.641845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.641853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.641863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.641872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.641883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.641891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.641901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.641914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.641925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.641933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.641943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.641952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.641962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.641970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.641981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.641990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.642009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.642027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.642046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.642064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.642082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.642102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.642120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:05.257 [2024-07-15 13:44:31.642143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.642164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.642182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.642202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.642221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.642241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.642259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.642277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.257 [2024-07-15 13:44:31.642296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.257 [2024-07-15 13:44:31.642306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 
[2024-07-15 13:44:31.642333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 
13:44:31.642519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642711] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642901] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.642948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.258 [2024-07-15 13:44:31.642956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.643011] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20154f0 was disconnected and freed. reset controller. 00:14:05.258 [2024-07-15 13:44:31.644217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:05.258 13:44:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.258 task offset: 67840 on job bdev=Nvme0n1 fails 00:14:05.258 00:14:05.258 Latency(us) 00:14:05.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.258 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:05.258 Job: Nvme0n1 ended in about 0.47 seconds with error 00:14:05.258 Verification LBA range: start 0x0 length 0x400 00:14:05.258 Nvme0n1 : 0.47 1097.70 68.61 137.21 0.00 50490.82 1829.55 44346.03 00:14:05.258 =================================================================================================================== 00:14:05.258 Total : 1097.70 68.61 137.21 0.00 50490.82 1829.55 44346.03 00:14:05.258 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:05.258 [2024-07-15 13:44:31.646217] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:05.258 [2024-07-15 13:44:31.646239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c043b0 (9): Bad file descriptor 00:14:05.258 13:44:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.258 13:44:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:05.258 [2024-07-15 13:44:31.652564] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:14:05.258 [2024-07-15 13:44:31.652666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:05.258 [2024-07-15 13:44:31.652688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.258 [2024-07-15 13:44:31.652703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:14:05.259 [2024-07-15 13:44:31.652718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:14:05.259 [2024-07-15 13:44:31.652725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:14:05.259 [2024-07-15 13:44:31.652732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c043b0 00:14:05.259 [2024-07-15 13:44:31.652751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c043b0 (9): Bad file descriptor 00:14:05.259 [2024-07-15 13:44:31.652764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:14:05.259 [2024-07-15 13:44:31.652771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:14:05.259 [2024-07-15 13:44:31.652780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:14:05.259 [2024-07-15 13:44:31.652792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:05.259 13:44:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.259 13:44:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:06.200 13:44:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1015054 00:14:06.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1015054) - No such process 00:14:06.200 13:44:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:06.200 13:44:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:06.200 13:44:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:06.200 13:44:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:06.200 13:44:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:06.200 13:44:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:06.200 13:44:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:06.200 13:44:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:06.200 { 00:14:06.200 "params": { 00:14:06.200 "name": "Nvme$subsystem", 00:14:06.200 "trtype": "$TEST_TRANSPORT", 00:14:06.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:06.201 "adrfam": "ipv4", 00:14:06.201 "trsvcid": "$NVMF_PORT", 00:14:06.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:06.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:06.201 "hdgst": ${hdgst:-false}, 00:14:06.201 "ddgst": ${ddgst:-false} 00:14:06.201 }, 00:14:06.201 "method": "bdev_nvme_attach_controller" 00:14:06.201 } 00:14:06.201 EOF 00:14:06.201 )") 00:14:06.201 13:44:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:06.201 13:44:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
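Annotation: the trace above shows the host_management test generating a one-controller JSON config with gen_nvmf_target_json and handing it to bdevperf as --json /dev/fd/62, which is what bash process substitution expands to. A minimal standalone sketch of the same pattern follows; the transport values (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode0/host0) are the ones visible in this log, while the exact JSON wrapper emitted by the real helper may differ slightly. The fully resolved parameters the helper actually prints appear a few lines below in the trace.

    # Sketch only, not the actual nvmf/common.sh gen_nvmf_target_json helper.
    # Build a bdev_nvme_attach_controller config and feed it to bdevperf
    # through process substitution (bash exposes it as /dev/fd/NN).
    config='{"subsystems":[{"subsystem":"bdev","config":[{
      "method":"bdev_nvme_attach_controller",
      "params":{"name":"Nvme0","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4",
                "trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode0",
                "hostnqn":"nqn.2016-06.io.spdk:host0","hdgst":false,"ddgst":false}}]}]}'
    ./build/examples/bdevperf --json <(printf '%s\n' "$config") -q 64 -o 65536 -w verify -t 1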
00:14:06.201 13:44:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:06.201 13:44:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:06.201 "params": { 00:14:06.201 "name": "Nvme0", 00:14:06.201 "trtype": "tcp", 00:14:06.201 "traddr": "10.0.0.2", 00:14:06.201 "adrfam": "ipv4", 00:14:06.201 "trsvcid": "4420", 00:14:06.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:06.201 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:06.201 "hdgst": false, 00:14:06.201 "ddgst": false 00:14:06.201 }, 00:14:06.201 "method": "bdev_nvme_attach_controller" 00:14:06.201 }' 00:14:06.201 [2024-07-15 13:44:32.715353] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:06.201 [2024-07-15 13:44:32.715409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1015405 ] 00:14:06.462 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.462 [2024-07-15 13:44:32.774267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.462 [2024-07-15 13:44:32.836538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.723 Running I/O for 1 seconds... 00:14:07.661 00:14:07.661 Latency(us) 00:14:07.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.661 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:07.661 Verification LBA range: start 0x0 length 0x400 00:14:07.661 Nvme0n1 : 1.01 1139.80 71.24 0.00 0.00 55303.69 2252.80 44127.57 00:14:07.661 =================================================================================================================== 00:14:07.661 Total : 1139.80 71.24 0.00 0.00 55303.69 2252.80 44127.57 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:07.921 rmmod nvme_tcp 00:14:07.921 rmmod nvme_fabrics 00:14:07.921 rmmod nvme_keyring 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@489 -- # '[' -n 1014698 ']' 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1014698 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1014698 ']' 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1014698 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1014698 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1014698' 00:14:07.921 killing process with pid 1014698 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1014698 00:14:07.921 13:44:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1014698 00:14:08.181 [2024-07-15 13:44:34.526368] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:08.181 13:44:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:08.181 13:44:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:08.181 13:44:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:08.181 13:44:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:08.181 13:44:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:08.181 13:44:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.181 13:44:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:08.181 13:44:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.093 13:44:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:10.354 13:44:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:10.354 00:14:10.354 real 0m14.188s 00:14:10.354 user 0m22.739s 00:14:10.354 sys 0m6.349s 00:14:10.354 13:44:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:10.354 13:44:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:10.354 ************************************ 00:14:10.354 END TEST nvmf_host_management 00:14:10.354 ************************************ 00:14:10.354 13:44:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:10.354 13:44:36 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:10.354 13:44:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:10.354 13:44:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.354 13:44:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:10.354 ************************************ 00:14:10.354 START TEST nvmf_lvol 00:14:10.354 
************************************ 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:10.354 * Looking for test storage... 00:14:10.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:10.354 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:10.355 13:44:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:18.493 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:18.493 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:18.493 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:18.493 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:18.493 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:18.493 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:18.493 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:18.494 13:44:43 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:18.494 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:18.494 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:18.494 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
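Annotation: the entries around this point show nvmf/common.sh matching the two Intel E810 ports (0x8086:0x159b) and mapping each PCI address to its kernel network interfaces through sysfs. A minimal sketch of that mapping, hard-coding the two PCI addresses reported in this log, is below; the real gather_supported_nvmf_pci_devs also applies the driver and link-state checks visible in the trace.

    # Sketch of the PCI-to-netdev lookup used by the test framework.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one sysfs entry per interface
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done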
00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:18.494 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:18.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:14:18.494 00:14:18.494 --- 10.0.0.2 ping statistics --- 00:14:18.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.494 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:18.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:14:18.494 00:14:18.494 --- 10.0.0.1 ping statistics --- 00:14:18.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.494 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1019834 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1019834 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1019834 ']' 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.494 13:44:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:18.494 [2024-07-15 13:44:43.986418] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:18.494 [2024-07-15 13:44:43.986480] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.494 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.494 [2024-07-15 13:44:44.056737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:18.494 [2024-07-15 13:44:44.132104] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.494 [2024-07-15 13:44:44.132150] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:18.494 [2024-07-15 13:44:44.132159] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.495 [2024-07-15 13:44:44.132165] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.495 [2024-07-15 13:44:44.132171] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.495 [2024-07-15 13:44:44.132246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.495 [2024-07-15 13:44:44.132368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.495 [2024-07-15 13:44:44.132371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.495 13:44:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.495 13:44:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:18.495 13:44:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:18.495 13:44:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:18.495 13:44:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:18.495 13:44:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.495 13:44:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:18.495 [2024-07-15 13:44:44.952655] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.495 13:44:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:18.755 13:44:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:18.755 13:44:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:19.015 13:44:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:19.015 13:44:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:19.015 13:44:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:19.276 13:44:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=33d40ed8-0e18-4a29-8f3b-b9f682b7a47b 00:14:19.276 13:44:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 33d40ed8-0e18-4a29-8f3b-b9f682b7a47b lvol 20 00:14:19.537 13:44:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=014da7ab-e98f-4606-8e20-788fa216487f 00:14:19.537 13:44:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:19.537 13:44:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 014da7ab-e98f-4606-8e20-788fa216487f 00:14:19.797 13:44:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:19.797 [2024-07-15 13:44:46.310577] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.058 13:44:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:20.058 13:44:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1020444 00:14:20.058 13:44:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:20.058 13:44:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:20.058 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.998 13:44:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 014da7ab-e98f-4606-8e20-788fa216487f MY_SNAPSHOT 00:14:21.258 13:44:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6d412ba8-b16d-4cf9-8791-c89b385714bc 00:14:21.258 13:44:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 014da7ab-e98f-4606-8e20-788fa216487f 30 00:14:21.542 13:44:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6d412ba8-b16d-4cf9-8791-c89b385714bc MY_CLONE 00:14:21.808 13:44:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=23c74283-48e7-4d06-b489-2b6789742124 00:14:21.808 13:44:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 23c74283-48e7-4d06-b489-2b6789742124 00:14:22.069 13:44:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1020444 00:14:32.068 Initializing NVMe Controllers 00:14:32.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:32.068 Controller IO queue size 128, less than required. 00:14:32.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:32.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:32.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:32.068 Initialization complete. Launching workers. 
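Annotation: while spdk_nvme_perf drives random writes at the exported lvol (its results follow below), the trace above also exercises the snapshot and clone RPCs: the live lvol is snapshotted as MY_SNAPSHOT, resized to 30 (LVOL_BDEV_FINAL_SIZE), cloned as MY_CLONE, and the clone is inflated so it no longer depends on the snapshot. A short sketch of that sequence, reusing the $rpc and $lvol variables from the setup sketch above (the log uses the literal UUIDs instead), is:

    # Sketch of the snapshot/clone steps traced above (variable names are illustrative).
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # read-only snapshot of the live lvol
    $rpc bdev_lvol_resize "$lvol" 30                      # grow the original lvol to its final size
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone backed by the snapshot
    $rpc bdev_lvol_inflate "$clone"                       # allocate all clusters, dropping the snapshot dependency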
00:14:32.068 ======================================================== 00:14:32.068 Latency(us) 00:14:32.068 Device Information : IOPS MiB/s Average min max 00:14:32.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12567.90 49.09 10188.18 1490.35 67589.15 00:14:32.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 18181.30 71.02 7040.64 685.20 49662.86 00:14:32.068 ======================================================== 00:14:32.068 Total : 30749.20 120.11 8327.11 685.20 67589.15 00:14:32.068 00:14:32.068 13:44:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 014da7ab-e98f-4606-8e20-788fa216487f 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 33d40ed8-0e18-4a29-8f3b-b9f682b7a47b 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:32.068 rmmod nvme_tcp 00:14:32.068 rmmod nvme_fabrics 00:14:32.068 rmmod nvme_keyring 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1019834 ']' 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1019834 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1019834 ']' 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1019834 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1019834 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1019834' 00:14:32.068 killing process with pid 1019834 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1019834 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1019834 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:32.068 
13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.068 13:44:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.453 13:44:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:33.453 00:14:33.453 real 0m23.120s 00:14:33.453 user 1m4.098s 00:14:33.453 sys 0m7.545s 00:14:33.453 13:44:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:33.453 13:44:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:33.453 ************************************ 00:14:33.453 END TEST nvmf_lvol 00:14:33.453 ************************************ 00:14:33.453 13:44:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:33.453 13:44:59 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:33.453 13:44:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:33.453 13:44:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:33.453 13:44:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:33.453 ************************************ 00:14:33.453 START TEST nvmf_lvs_grow 00:14:33.453 ************************************ 00:14:33.453 13:44:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:33.713 * Looking for test storage... 
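The nvmf_lvol run that finishes above boils down to roughly the RPC sequence below. This is a minimal sketch, not the test script itself: $rpc, $md0/$md1 and the relative paths are shorthand assumed here for the absolute rpc.py and build paths printed in the trace, and all sizes and flags are copied from it.

    rpc=scripts/rpc.py                                    # assumed shorthand for the rpc.py client used in the trace

    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport (flags as in the trace)
    md0=$($rpc bdev_malloc_create 64 512)                 # two 64 MiB malloc bdevs, 512 B blocks
    md1=$($rpc bdev_malloc_create 64 512)
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$md0 $md1"
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # prints the lvstore UUID (33d40ed8-... above)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB logical volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # I/O runs against the namespace while the volume is snapshotted, resized, cloned and inflated
    build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    wait "$perf_pid"

    # teardown in the same order as the trace: subsystem, lvol, lvstore
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"

The -c 0x18 core mask keeps the perf initiator on cores 3 and 4, away from the target reactors that the trace shows starting on cores 0-2, so target and initiator do not compete for the same CPUs.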
00:14:33.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.713 13:45:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.713 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:33.713 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.713 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.713 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.713 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.713 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.713 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.713 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.713 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.713 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.713 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.713 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:33.713 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:33.713 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:33.714 13:45:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.302 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:40.303 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:40.303 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:40.303 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:40.303 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:40.303 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:40.564 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:40.564 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:40.564 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:40.564 13:45:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:40.564 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:40.564 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:40.564 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:40.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:40.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:14:40.564 00:14:40.564 --- 10.0.0.2 ping statistics --- 00:14:40.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.564 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:14:40.564 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:40.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:40.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:14:40.564 00:14:40.564 --- 10.0.0.1 ping statistics --- 00:14:40.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.564 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:14:40.564 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.564 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:40.564 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:40.564 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.564 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:40.564 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:40.564 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.564 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:40.564 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:40.824 13:45:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:40.824 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.824 13:45:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:40.825 13:45:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:40.825 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1026886 00:14:40.825 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1026886 00:14:40.825 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:40.825 13:45:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1026886 ']' 00:14:40.825 13:45:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.825 13:45:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.825 13:45:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.825 13:45:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.825 13:45:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:40.825 [2024-07-15 13:45:07.174657] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:40.825 [2024-07-15 13:45:07.174724] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.825 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.825 [2024-07-15 13:45:07.244540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.825 [2024-07-15 13:45:07.319361] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.825 [2024-07-15 13:45:07.319399] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
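The nvmftestinit sequence above builds the NVMe/TCP test bed out of the two ice-driven ports found by the PCI scan: the target port is moved into its own network namespace, so initiator and target run on one host but still exchange traffic over the physical link. A minimal sketch of the equivalent bring-up, assuming the cvl_0_0 (target) and cvl_0_1 (initiator) interface names from the trace:

    ip -4 addr flush cvl_0_0                                # drop any stale addresses first
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target NIC into the private namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) through the host firewall
    ping -c 1 10.0.0.2                                      # reachability check in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # the target itself is then launched inside the namespace, as in the trace:
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1

Everything the tests do from here on talks to 10.0.0.2:4420, i.e. to the namespaced target, while rpc.py and the initiators stay in the root namespace.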
00:14:40.825 [2024-07-15 13:45:07.319406] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.825 [2024-07-15 13:45:07.319412] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.825 [2024-07-15 13:45:07.319418] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.825 [2024-07-15 13:45:07.319437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.766 13:45:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.766 13:45:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:41.766 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.766 13:45:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:41.766 13:45:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:41.766 13:45:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.766 13:45:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:41.766 [2024-07-15 13:45:08.122465] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.766 13:45:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:41.766 13:45:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:41.766 13:45:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.766 13:45:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:41.766 ************************************ 00:14:41.766 START TEST lvs_grow_clean 00:14:41.766 ************************************ 00:14:41.766 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:41.766 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:41.766 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:41.766 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:41.766 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:41.766 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:41.766 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:41.766 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:41.766 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:41.766 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:42.026 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:42.026 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:42.285 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=262768b5-3376-4067-a6a9-958c055d9f03 00:14:42.285 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 262768b5-3376-4067-a6a9-958c055d9f03 00:14:42.285 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:42.285 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:42.285 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:42.285 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 262768b5-3376-4067-a6a9-958c055d9f03 lvol 150 00:14:42.544 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=964225a3-e7cc-478e-b8e3-4fc4170e1f7e 00:14:42.544 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:42.544 13:45:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:42.544 [2024-07-15 13:45:09.000152] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:42.544 [2024-07-15 13:45:09.000205] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:42.544 true 00:14:42.544 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:42.544 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 262768b5-3376-4067-a6a9-958c055d9f03 00:14:42.805 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:42.805 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:42.805 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 964225a3-e7cc-478e-b8e3-4fc4170e1f7e 00:14:43.065 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:43.326 [2024-07-15 13:45:09.618026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.326 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.326 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1027512 00:14:43.326 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:43.326 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:43.326 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1027512 /var/tmp/bdevperf.sock 00:14:43.326 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1027512 ']' 00:14:43.326 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.326 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.326 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:43.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:43.326 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.326 13:45:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:43.326 [2024-07-15 13:45:09.838342] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:43.326 [2024-07-15 13:45:09.838396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027512 ] 00:14:43.587 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.587 [2024-07-15 13:45:09.915330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.587 [2024-07-15 13:45:09.979855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.157 13:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.157 13:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:44.157 13:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:44.417 Nvme0n1 00:14:44.417 13:45:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:44.678 [ 00:14:44.678 { 00:14:44.678 "name": "Nvme0n1", 00:14:44.678 "aliases": [ 00:14:44.678 "964225a3-e7cc-478e-b8e3-4fc4170e1f7e" 00:14:44.678 ], 00:14:44.678 "product_name": "NVMe disk", 00:14:44.678 "block_size": 4096, 00:14:44.678 "num_blocks": 38912, 00:14:44.678 "uuid": "964225a3-e7cc-478e-b8e3-4fc4170e1f7e", 00:14:44.678 "assigned_rate_limits": { 00:14:44.678 "rw_ios_per_sec": 0, 00:14:44.678 "rw_mbytes_per_sec": 0, 00:14:44.678 "r_mbytes_per_sec": 0, 00:14:44.678 "w_mbytes_per_sec": 0 00:14:44.678 }, 00:14:44.678 "claimed": false, 00:14:44.678 "zoned": false, 00:14:44.678 "supported_io_types": { 00:14:44.678 "read": true, 00:14:44.678 "write": true, 00:14:44.678 "unmap": true, 00:14:44.678 "flush": true, 00:14:44.678 "reset": true, 00:14:44.678 "nvme_admin": true, 00:14:44.678 "nvme_io": true, 00:14:44.678 "nvme_io_md": false, 00:14:44.678 "write_zeroes": true, 00:14:44.678 "zcopy": false, 00:14:44.678 "get_zone_info": false, 00:14:44.678 "zone_management": false, 00:14:44.678 "zone_append": false, 00:14:44.678 "compare": true, 00:14:44.678 "compare_and_write": true, 00:14:44.678 "abort": true, 00:14:44.678 "seek_hole": false, 00:14:44.678 "seek_data": false, 00:14:44.678 "copy": true, 00:14:44.678 "nvme_iov_md": false 00:14:44.678 }, 00:14:44.678 "memory_domains": [ 00:14:44.678 { 00:14:44.678 "dma_device_id": "system", 00:14:44.678 "dma_device_type": 1 00:14:44.678 } 00:14:44.678 ], 00:14:44.678 "driver_specific": { 00:14:44.678 "nvme": [ 00:14:44.678 { 00:14:44.678 "trid": { 00:14:44.678 "trtype": "TCP", 00:14:44.678 "adrfam": "IPv4", 00:14:44.678 "traddr": "10.0.0.2", 00:14:44.678 "trsvcid": "4420", 00:14:44.678 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:44.678 }, 00:14:44.678 "ctrlr_data": { 00:14:44.678 "cntlid": 1, 00:14:44.678 "vendor_id": "0x8086", 00:14:44.678 "model_number": "SPDK bdev Controller", 00:14:44.678 "serial_number": "SPDK0", 00:14:44.678 "firmware_revision": "24.09", 00:14:44.678 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:44.678 "oacs": { 00:14:44.678 "security": 0, 00:14:44.678 "format": 0, 00:14:44.678 "firmware": 0, 00:14:44.678 "ns_manage": 0 00:14:44.678 }, 00:14:44.678 "multi_ctrlr": true, 00:14:44.678 "ana_reporting": false 00:14:44.678 }, 
00:14:44.678 "vs": { 00:14:44.678 "nvme_version": "1.3" 00:14:44.678 }, 00:14:44.678 "ns_data": { 00:14:44.678 "id": 1, 00:14:44.678 "can_share": true 00:14:44.678 } 00:14:44.678 } 00:14:44.678 ], 00:14:44.678 "mp_policy": "active_passive" 00:14:44.678 } 00:14:44.678 } 00:14:44.678 ] 00:14:44.678 13:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1028032 00:14:44.678 13:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:44.678 13:45:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:44.678 Running I/O for 10 seconds... 00:14:46.062 Latency(us) 00:14:46.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.062 Nvme0n1 : 1.00 18127.00 70.81 0.00 0.00 0.00 0.00 0.00 00:14:46.062 =================================================================================================================== 00:14:46.062 Total : 18127.00 70.81 0.00 0.00 0.00 0.00 0.00 00:14:46.062 00:14:46.633 13:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 262768b5-3376-4067-a6a9-958c055d9f03 00:14:46.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.893 Nvme0n1 : 2.00 18252.50 71.30 0.00 0.00 0.00 0.00 0.00 00:14:46.893 =================================================================================================================== 00:14:46.893 Total : 18252.50 71.30 0.00 0.00 0.00 0.00 0.00 00:14:46.893 00:14:46.893 true 00:14:46.893 13:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 262768b5-3376-4067-a6a9-958c055d9f03 00:14:46.893 13:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:47.153 13:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:47.153 13:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:47.153 13:45:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1028032 00:14:47.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.807 Nvme0n1 : 3.00 18306.33 71.51 0.00 0.00 0.00 0.00 0.00 00:14:47.807 =================================================================================================================== 00:14:47.807 Total : 18306.33 71.51 0.00 0.00 0.00 0.00 0.00 00:14:47.807 00:14:48.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.745 Nvme0n1 : 4.00 18350.50 71.68 0.00 0.00 0.00 0.00 0.00 00:14:48.745 =================================================================================================================== 00:14:48.745 Total : 18350.50 71.68 0.00 0.00 0.00 0.00 0.00 00:14:48.745 00:14:49.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.686 Nvme0n1 : 5.00 18371.00 71.76 0.00 0.00 0.00 0.00 0.00 00:14:49.686 =================================================================================================================== 00:14:49.686 
Total : 18371.00 71.76 0.00 0.00 0.00 0.00 0.00 00:14:49.686 00:14:51.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.066 Nvme0n1 : 6.00 18390.50 71.84 0.00 0.00 0.00 0.00 0.00 00:14:51.066 =================================================================================================================== 00:14:51.066 Total : 18390.50 71.84 0.00 0.00 0.00 0.00 0.00 00:14:51.066 00:14:52.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.004 Nvme0n1 : 7.00 18414.71 71.93 0.00 0.00 0.00 0.00 0.00 00:14:52.004 =================================================================================================================== 00:14:52.004 Total : 18414.71 71.93 0.00 0.00 0.00 0.00 0.00 00:14:52.004 00:14:52.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.945 Nvme0n1 : 8.00 18425.88 71.98 0.00 0.00 0.00 0.00 0.00 00:14:52.945 =================================================================================================================== 00:14:52.945 Total : 18425.88 71.98 0.00 0.00 0.00 0.00 0.00 00:14:52.945 00:14:53.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.883 Nvme0n1 : 9.00 18425.67 71.98 0.00 0.00 0.00 0.00 0.00 00:14:53.883 =================================================================================================================== 00:14:53.883 Total : 18425.67 71.98 0.00 0.00 0.00 0.00 0.00 00:14:53.883 00:14:54.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.823 Nvme0n1 : 10.00 18438.60 72.03 0.00 0.00 0.00 0.00 0.00 00:14:54.823 =================================================================================================================== 00:14:54.823 Total : 18438.60 72.03 0.00 0.00 0.00 0.00 0.00 00:14:54.823 00:14:54.823 00:14:54.823 Latency(us) 00:14:54.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.823 Nvme0n1 : 10.01 18438.70 72.03 0.00 0.00 6937.62 4642.13 13161.81 00:14:54.823 =================================================================================================================== 00:14:54.823 Total : 18438.70 72.03 0.00 0.00 6937.62 4642.13 13161.81 00:14:54.823 0 00:14:54.823 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1027512 00:14:54.823 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1027512 ']' 00:14:54.823 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1027512 00:14:54.823 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:54.823 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.823 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1027512 00:14:54.823 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:54.823 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:54.823 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1027512' 00:14:54.823 killing process with pid 1027512 00:14:54.823 13:45:21 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1027512 00:14:54.823 Received shutdown signal, test time was about 10.000000 seconds 00:14:54.823 00:14:54.823 Latency(us) 00:14:54.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.823 =================================================================================================================== 00:14:54.823 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:54.823 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1027512 00:14:55.083 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:55.083 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:55.342 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 262768b5-3376-4067-a6a9-958c055d9f03 00:14:55.342 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:55.600 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:55.600 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:55.600 13:45:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:55.600 [2024-07-15 13:45:22.041310] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:55.600 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 262768b5-3376-4067-a6a9-958c055d9f03 00:14:55.600 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:55.600 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 262768b5-3376-4067-a6a9-958c055d9f03 00:14:55.600 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.600 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.600 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.600 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.600 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.600 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.600 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.600 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:55.600 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 262768b5-3376-4067-a6a9-958c055d9f03 00:14:55.859 request: 00:14:55.859 { 00:14:55.859 "uuid": "262768b5-3376-4067-a6a9-958c055d9f03", 00:14:55.859 "method": "bdev_lvol_get_lvstores", 00:14:55.859 "req_id": 1 00:14:55.859 } 00:14:55.859 Got JSON-RPC error response 00:14:55.859 response: 00:14:55.859 { 00:14:55.859 "code": -19, 00:14:55.859 "message": "No such device" 00:14:55.859 } 00:14:55.859 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:55.859 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:55.859 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:55.859 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:55.859 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:56.119 aio_bdev 00:14:56.119 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 964225a3-e7cc-478e-b8e3-4fc4170e1f7e 00:14:56.119 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=964225a3-e7cc-478e-b8e3-4fc4170e1f7e 00:14:56.119 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:56.119 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:56.119 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:56.119 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:56.119 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:56.119 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 964225a3-e7cc-478e-b8e3-4fc4170e1f7e -t 2000 00:14:56.378 [ 00:14:56.378 { 00:14:56.378 "name": "964225a3-e7cc-478e-b8e3-4fc4170e1f7e", 00:14:56.378 "aliases": [ 00:14:56.378 "lvs/lvol" 00:14:56.378 ], 00:14:56.378 "product_name": "Logical Volume", 00:14:56.378 "block_size": 4096, 00:14:56.378 "num_blocks": 38912, 00:14:56.378 "uuid": "964225a3-e7cc-478e-b8e3-4fc4170e1f7e", 00:14:56.378 "assigned_rate_limits": { 00:14:56.378 "rw_ios_per_sec": 0, 00:14:56.378 "rw_mbytes_per_sec": 0, 00:14:56.378 "r_mbytes_per_sec": 0, 00:14:56.378 "w_mbytes_per_sec": 0 00:14:56.378 }, 00:14:56.378 "claimed": false, 00:14:56.378 "zoned": false, 00:14:56.378 "supported_io_types": { 00:14:56.378 "read": true, 00:14:56.378 "write": true, 00:14:56.378 "unmap": true, 00:14:56.378 "flush": false, 00:14:56.378 "reset": true, 00:14:56.378 "nvme_admin": false, 00:14:56.378 "nvme_io": false, 00:14:56.378 
"nvme_io_md": false, 00:14:56.378 "write_zeroes": true, 00:14:56.378 "zcopy": false, 00:14:56.378 "get_zone_info": false, 00:14:56.378 "zone_management": false, 00:14:56.378 "zone_append": false, 00:14:56.378 "compare": false, 00:14:56.378 "compare_and_write": false, 00:14:56.378 "abort": false, 00:14:56.378 "seek_hole": true, 00:14:56.378 "seek_data": true, 00:14:56.378 "copy": false, 00:14:56.378 "nvme_iov_md": false 00:14:56.378 }, 00:14:56.378 "driver_specific": { 00:14:56.378 "lvol": { 00:14:56.378 "lvol_store_uuid": "262768b5-3376-4067-a6a9-958c055d9f03", 00:14:56.378 "base_bdev": "aio_bdev", 00:14:56.378 "thin_provision": false, 00:14:56.378 "num_allocated_clusters": 38, 00:14:56.378 "snapshot": false, 00:14:56.378 "clone": false, 00:14:56.378 "esnap_clone": false 00:14:56.378 } 00:14:56.378 } 00:14:56.378 } 00:14:56.378 ] 00:14:56.378 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:56.378 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 262768b5-3376-4067-a6a9-958c055d9f03 00:14:56.378 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:56.378 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:56.378 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 262768b5-3376-4067-a6a9-958c055d9f03 00:14:56.378 13:45:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:56.638 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:56.638 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 964225a3-e7cc-478e-b8e3-4fc4170e1f7e 00:14:56.898 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 262768b5-3376-4067-a6a9-958c055d9f03 00:14:56.898 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:57.157 00:14:57.157 real 0m15.367s 00:14:57.157 user 0m15.064s 00:14:57.157 sys 0m1.326s 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:57.157 ************************************ 00:14:57.157 END TEST lvs_grow_clean 00:14:57.157 ************************************ 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:57.157 ************************************ 00:14:57.157 START TEST lvs_grow_dirty 00:14:57.157 ************************************ 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:57.157 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:57.416 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:57.416 13:45:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:57.675 13:45:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9b79dff1-4b1a-4525-af6d-793291c67281 00:14:57.675 13:45:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b79dff1-4b1a-4525-af6d-793291c67281 00:14:57.675 13:45:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:57.675 13:45:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:57.675 13:45:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:57.675 13:45:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9b79dff1-4b1a-4525-af6d-793291c67281 lvol 150 00:14:57.934 13:45:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c79e7ffd-b6e7-4a4e-a9c0-abb046aa5664 00:14:57.934 13:45:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:57.934 13:45:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:57.934 
[2024-07-15 13:45:24.441711] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:57.934 [2024-07-15 13:45:24.441761] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:57.934 true 00:14:57.934 13:45:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b79dff1-4b1a-4525-af6d-793291c67281 00:14:57.935 13:45:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:58.195 13:45:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:58.195 13:45:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:58.455 13:45:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c79e7ffd-b6e7-4a4e-a9c0-abb046aa5664 00:14:58.456 13:45:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:58.715 [2024-07-15 13:45:25.031520] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.715 13:45:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:58.715 13:45:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1030826 00:14:58.715 13:45:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:58.715 13:45:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:58.715 13:45:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1030826 /var/tmp/bdevperf.sock 00:14:58.715 13:45:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1030826 ']' 00:14:58.715 13:45:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.715 13:45:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.715 13:45:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
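For orientation, the burst of RPC traffic around this point does three things: it exports the freshly created lvol over NVMe/TCP, starts a standalone bdevperf process with its own RPC socket, and attaches that process to the target as an NVMe-oF initiator before the 10-second randwrite run. The condensed sketch below is assembled from the commands visible in this trace, with the long workspace paths shortened to rpc.py, bdevperf and bdevperf.py, and with $LVOL_UUID standing in for the lvol UUID printed above; treat it as an illustration of the flow rather than a verbatim excerpt of nvmf_lvs_grow.sh.

    # Target side: wrap the lvol bdev in an NVMe-oF TCP subsystem plus a discovery listener.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL_UUID"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: bdevperf idles on its own socket (-z), connects back to the target
    # over TCP, and only then is the queued randwrite workload kicked off.
    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests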
00:14:58.715 13:45:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.715 13:45:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:58.976 [2024-07-15 13:45:25.244364] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:58.976 [2024-07-15 13:45:25.244414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030826 ] 00:14:58.976 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.976 [2024-07-15 13:45:25.317405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.976 [2024-07-15 13:45:25.371309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.544 13:45:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.544 13:45:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:59.544 13:45:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:59.803 Nvme0n1 00:14:59.803 13:45:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:00.063 [ 00:15:00.063 { 00:15:00.063 "name": "Nvme0n1", 00:15:00.063 "aliases": [ 00:15:00.063 "c79e7ffd-b6e7-4a4e-a9c0-abb046aa5664" 00:15:00.063 ], 00:15:00.063 "product_name": "NVMe disk", 00:15:00.063 "block_size": 4096, 00:15:00.063 "num_blocks": 38912, 00:15:00.063 "uuid": "c79e7ffd-b6e7-4a4e-a9c0-abb046aa5664", 00:15:00.063 "assigned_rate_limits": { 00:15:00.063 "rw_ios_per_sec": 0, 00:15:00.063 "rw_mbytes_per_sec": 0, 00:15:00.063 "r_mbytes_per_sec": 0, 00:15:00.063 "w_mbytes_per_sec": 0 00:15:00.063 }, 00:15:00.063 "claimed": false, 00:15:00.063 "zoned": false, 00:15:00.063 "supported_io_types": { 00:15:00.063 "read": true, 00:15:00.063 "write": true, 00:15:00.063 "unmap": true, 00:15:00.063 "flush": true, 00:15:00.063 "reset": true, 00:15:00.063 "nvme_admin": true, 00:15:00.063 "nvme_io": true, 00:15:00.063 "nvme_io_md": false, 00:15:00.063 "write_zeroes": true, 00:15:00.063 "zcopy": false, 00:15:00.063 "get_zone_info": false, 00:15:00.063 "zone_management": false, 00:15:00.063 "zone_append": false, 00:15:00.063 "compare": true, 00:15:00.063 "compare_and_write": true, 00:15:00.063 "abort": true, 00:15:00.063 "seek_hole": false, 00:15:00.063 "seek_data": false, 00:15:00.063 "copy": true, 00:15:00.063 "nvme_iov_md": false 00:15:00.063 }, 00:15:00.063 "memory_domains": [ 00:15:00.063 { 00:15:00.063 "dma_device_id": "system", 00:15:00.063 "dma_device_type": 1 00:15:00.063 } 00:15:00.063 ], 00:15:00.063 "driver_specific": { 00:15:00.063 "nvme": [ 00:15:00.063 { 00:15:00.063 "trid": { 00:15:00.063 "trtype": "TCP", 00:15:00.063 "adrfam": "IPv4", 00:15:00.063 "traddr": "10.0.0.2", 00:15:00.063 "trsvcid": "4420", 00:15:00.063 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:00.063 }, 00:15:00.063 "ctrlr_data": { 00:15:00.063 "cntlid": 1, 00:15:00.063 "vendor_id": "0x8086", 00:15:00.063 "model_number": "SPDK bdev Controller", 00:15:00.063 "serial_number": "SPDK0", 
00:15:00.063 "firmware_revision": "24.09", 00:15:00.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:00.063 "oacs": { 00:15:00.063 "security": 0, 00:15:00.063 "format": 0, 00:15:00.063 "firmware": 0, 00:15:00.063 "ns_manage": 0 00:15:00.063 }, 00:15:00.063 "multi_ctrlr": true, 00:15:00.063 "ana_reporting": false 00:15:00.063 }, 00:15:00.063 "vs": { 00:15:00.063 "nvme_version": "1.3" 00:15:00.063 }, 00:15:00.063 "ns_data": { 00:15:00.063 "id": 1, 00:15:00.063 "can_share": true 00:15:00.063 } 00:15:00.063 } 00:15:00.063 ], 00:15:00.063 "mp_policy": "active_passive" 00:15:00.063 } 00:15:00.063 } 00:15:00.063 ] 00:15:00.063 13:45:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1031157 00:15:00.063 13:45:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:00.063 13:45:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:00.063 Running I/O for 10 seconds... 00:15:01.004 Latency(us) 00:15:01.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.004 Nvme0n1 : 1.00 17540.00 68.52 0.00 0.00 0.00 0.00 0.00 00:15:01.004 =================================================================================================================== 00:15:01.004 Total : 17540.00 68.52 0.00 0.00 0.00 0.00 0.00 00:15:01.004 00:15:01.942 13:45:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9b79dff1-4b1a-4525-af6d-793291c67281 00:15:02.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.203 Nvme0n1 : 2.00 17666.00 69.01 0.00 0.00 0.00 0.00 0.00 00:15:02.203 =================================================================================================================== 00:15:02.203 Total : 17666.00 69.01 0.00 0.00 0.00 0.00 0.00 00:15:02.203 00:15:02.203 true 00:15:02.203 13:45:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b79dff1-4b1a-4525-af6d-793291c67281 00:15:02.203 13:45:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:02.463 13:45:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:02.463 13:45:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:02.463 13:45:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1031157 00:15:03.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.069 Nvme0n1 : 3.00 17713.33 69.19 0.00 0.00 0.00 0.00 0.00 00:15:03.069 =================================================================================================================== 00:15:03.069 Total : 17713.33 69.19 0.00 0.00 0.00 0.00 0.00 00:15:03.069 00:15:04.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.015 Nvme0n1 : 4.00 17743.00 69.31 0.00 0.00 0.00 0.00 0.00 00:15:04.015 =================================================================================================================== 00:15:04.015 Total : 17743.00 69.31 0.00 
0.00 0.00 0.00 0.00 00:15:04.015 00:15:05.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.399 Nvme0n1 : 5.00 17770.40 69.42 0.00 0.00 0.00 0.00 0.00 00:15:05.399 =================================================================================================================== 00:15:05.399 Total : 17770.40 69.42 0.00 0.00 0.00 0.00 0.00 00:15:05.399 00:15:06.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.339 Nvme0n1 : 6.00 17794.00 69.51 0.00 0.00 0.00 0.00 0.00 00:15:06.339 =================================================================================================================== 00:15:06.339 Total : 17794.00 69.51 0.00 0.00 0.00 0.00 0.00 00:15:06.339 00:15:07.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.282 Nvme0n1 : 7.00 17814.29 69.59 0.00 0.00 0.00 0.00 0.00 00:15:07.282 =================================================================================================================== 00:15:07.282 Total : 17814.29 69.59 0.00 0.00 0.00 0.00 0.00 00:15:07.282 00:15:08.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.223 Nvme0n1 : 8.00 17829.50 69.65 0.00 0.00 0.00 0.00 0.00 00:15:08.223 =================================================================================================================== 00:15:08.223 Total : 17829.50 69.65 0.00 0.00 0.00 0.00 0.00 00:15:08.223 00:15:09.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.163 Nvme0n1 : 9.00 17844.89 69.71 0.00 0.00 0.00 0.00 0.00 00:15:09.163 =================================================================================================================== 00:15:09.163 Total : 17844.89 69.71 0.00 0.00 0.00 0.00 0.00 00:15:09.163 00:15:10.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.103 Nvme0n1 : 10.00 17854.80 69.75 0.00 0.00 0.00 0.00 0.00 00:15:10.103 =================================================================================================================== 00:15:10.103 Total : 17854.80 69.75 0.00 0.00 0.00 0.00 0.00 00:15:10.103 00:15:10.103 00:15:10.103 Latency(us) 00:15:10.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.103 Nvme0n1 : 10.01 17855.12 69.75 0.00 0.00 7164.15 6116.69 15947.09 00:15:10.103 =================================================================================================================== 00:15:10.103 Total : 17855.12 69.75 0.00 0.00 7164.15 6116.69 15947.09 00:15:10.103 0 00:15:10.103 13:45:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1030826 00:15:10.103 13:45:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1030826 ']' 00:15:10.103 13:45:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1030826 00:15:10.103 13:45:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:10.103 13:45:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:10.103 13:45:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1030826 00:15:10.103 13:45:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:10.103 13:45:36 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:10.103 13:45:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1030826' 00:15:10.103 killing process with pid 1030826 00:15:10.103 13:45:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1030826 00:15:10.103 Received shutdown signal, test time was about 10.000000 seconds 00:15:10.103 00:15:10.103 Latency(us) 00:15:10.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.103 =================================================================================================================== 00:15:10.103 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.103 13:45:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1030826 00:15:10.363 13:45:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:10.623 13:45:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:10.623 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b79dff1-4b1a-4525-af6d-793291c67281 00:15:10.623 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1026886 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1026886 00:15:10.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1026886 Killed "${NVMF_APP[@]}" "$@" 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1033188 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1033188 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1033188 ']' 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:10.882 13:45:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:10.882 [2024-07-15 13:45:37.331762] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:10.882 [2024-07-15 13:45:37.331822] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.882 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.882 [2024-07-15 13:45:37.398369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.143 [2024-07-15 13:45:37.463526] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.143 [2024-07-15 13:45:37.463561] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.143 [2024-07-15 13:45:37.463568] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.143 [2024-07-15 13:45:37.463575] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.143 [2024-07-15 13:45:37.463580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
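The target restart logged just above is the heart of the "dirty" case: the previous nvmf_tgt was killed with SIGKILL while the grown lvstore was still open, so nothing was unloaded cleanly. When the new target re-creates the AIO bdev over the same backing file (next in the trace), the blobstore load path has to run recovery instead of a clean load, and the test then verifies that both the lvol and the post-grow cluster counts survived. A minimal sketch of that re-attach, assuming the same backing file and using rpc.py on the default socket as a stand-in for the harness wrapper:

    # Re-create the AIO bdev over the untouched backing file; the lvstore metadata is
    # replayed by blobstore recovery (see the "Performing recovery on blobstore" notices below).
    rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096   # AIO_FILE: placeholder for the backing file path
    rpc.py bdev_wait_for_examine

    # Confirm the lvol came back and the grown geometry is intact.
    rpc.py bdev_get_bdevs -b "$LVOL_UUID" -t 2000
    rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters'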
00:15:11.143 [2024-07-15 13:45:37.463604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.713 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.713 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:11.713 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:11.713 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:11.713 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:11.713 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.713 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:11.971 [2024-07-15 13:45:38.276519] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:11.971 [2024-07-15 13:45:38.276609] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:11.971 [2024-07-15 13:45:38.276639] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:11.972 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:11.972 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c79e7ffd-b6e7-4a4e-a9c0-abb046aa5664 00:15:11.972 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c79e7ffd-b6e7-4a4e-a9c0-abb046aa5664 00:15:11.972 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:11.972 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:11.972 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:11.972 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:11.972 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:11.972 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c79e7ffd-b6e7-4a4e-a9c0-abb046aa5664 -t 2000 00:15:12.232 [ 00:15:12.232 { 00:15:12.232 "name": "c79e7ffd-b6e7-4a4e-a9c0-abb046aa5664", 00:15:12.232 "aliases": [ 00:15:12.232 "lvs/lvol" 00:15:12.232 ], 00:15:12.232 "product_name": "Logical Volume", 00:15:12.232 "block_size": 4096, 00:15:12.232 "num_blocks": 38912, 00:15:12.232 "uuid": "c79e7ffd-b6e7-4a4e-a9c0-abb046aa5664", 00:15:12.232 "assigned_rate_limits": { 00:15:12.232 "rw_ios_per_sec": 0, 00:15:12.232 "rw_mbytes_per_sec": 0, 00:15:12.232 "r_mbytes_per_sec": 0, 00:15:12.232 "w_mbytes_per_sec": 0 00:15:12.232 }, 00:15:12.232 "claimed": false, 00:15:12.232 "zoned": false, 00:15:12.232 "supported_io_types": { 00:15:12.232 "read": true, 00:15:12.232 "write": true, 00:15:12.232 "unmap": true, 00:15:12.232 "flush": false, 00:15:12.232 "reset": true, 00:15:12.232 "nvme_admin": false, 00:15:12.232 "nvme_io": false, 00:15:12.232 "nvme_io_md": 
false, 00:15:12.232 "write_zeroes": true, 00:15:12.232 "zcopy": false, 00:15:12.232 "get_zone_info": false, 00:15:12.232 "zone_management": false, 00:15:12.232 "zone_append": false, 00:15:12.232 "compare": false, 00:15:12.232 "compare_and_write": false, 00:15:12.232 "abort": false, 00:15:12.232 "seek_hole": true, 00:15:12.232 "seek_data": true, 00:15:12.232 "copy": false, 00:15:12.232 "nvme_iov_md": false 00:15:12.232 }, 00:15:12.232 "driver_specific": { 00:15:12.232 "lvol": { 00:15:12.232 "lvol_store_uuid": "9b79dff1-4b1a-4525-af6d-793291c67281", 00:15:12.232 "base_bdev": "aio_bdev", 00:15:12.232 "thin_provision": false, 00:15:12.232 "num_allocated_clusters": 38, 00:15:12.232 "snapshot": false, 00:15:12.232 "clone": false, 00:15:12.232 "esnap_clone": false 00:15:12.232 } 00:15:12.232 } 00:15:12.232 } 00:15:12.232 ] 00:15:12.232 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:12.232 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b79dff1-4b1a-4525-af6d-793291c67281 00:15:12.232 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:12.492 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:12.492 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b79dff1-4b1a-4525-af6d-793291c67281 00:15:12.492 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:12.492 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:12.492 13:45:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:12.753 [2024-07-15 13:45:39.060444] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:12.753 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b79dff1-4b1a-4525-af6d-793291c67281 00:15:12.753 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:12.753 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b79dff1-4b1a-4525-af6d-793291c67281 00:15:12.753 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.753 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:12.753 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.753 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:12.753 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:15:12.753 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:12.753 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.753 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:12.753 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b79dff1-4b1a-4525-af6d-793291c67281 00:15:12.753 request: 00:15:12.753 { 00:15:12.753 "uuid": "9b79dff1-4b1a-4525-af6d-793291c67281", 00:15:12.753 "method": "bdev_lvol_get_lvstores", 00:15:12.753 "req_id": 1 00:15:12.753 } 00:15:12.753 Got JSON-RPC error response 00:15:12.753 response: 00:15:12.753 { 00:15:12.753 "code": -19, 00:15:12.753 "message": "No such device" 00:15:12.753 } 00:15:13.013 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:13.013 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:13.013 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:13.013 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:13.013 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:13.013 aio_bdev 00:15:13.013 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c79e7ffd-b6e7-4a4e-a9c0-abb046aa5664 00:15:13.013 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c79e7ffd-b6e7-4a4e-a9c0-abb046aa5664 00:15:13.013 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:13.013 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:13.013 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:13.013 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:13.013 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:13.273 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c79e7ffd-b6e7-4a4e-a9c0-abb046aa5664 -t 2000 00:15:13.273 [ 00:15:13.273 { 00:15:13.273 "name": "c79e7ffd-b6e7-4a4e-a9c0-abb046aa5664", 00:15:13.273 "aliases": [ 00:15:13.273 "lvs/lvol" 00:15:13.273 ], 00:15:13.273 "product_name": "Logical Volume", 00:15:13.273 "block_size": 4096, 00:15:13.273 "num_blocks": 38912, 00:15:13.273 "uuid": "c79e7ffd-b6e7-4a4e-a9c0-abb046aa5664", 00:15:13.273 "assigned_rate_limits": { 00:15:13.273 "rw_ios_per_sec": 0, 00:15:13.273 "rw_mbytes_per_sec": 0, 00:15:13.273 "r_mbytes_per_sec": 0, 00:15:13.273 "w_mbytes_per_sec": 0 00:15:13.273 }, 00:15:13.273 "claimed": false, 00:15:13.273 "zoned": false, 00:15:13.273 "supported_io_types": { 
00:15:13.273 "read": true, 00:15:13.273 "write": true, 00:15:13.273 "unmap": true, 00:15:13.273 "flush": false, 00:15:13.273 "reset": true, 00:15:13.273 "nvme_admin": false, 00:15:13.273 "nvme_io": false, 00:15:13.273 "nvme_io_md": false, 00:15:13.273 "write_zeroes": true, 00:15:13.273 "zcopy": false, 00:15:13.273 "get_zone_info": false, 00:15:13.273 "zone_management": false, 00:15:13.273 "zone_append": false, 00:15:13.273 "compare": false, 00:15:13.273 "compare_and_write": false, 00:15:13.273 "abort": false, 00:15:13.273 "seek_hole": true, 00:15:13.273 "seek_data": true, 00:15:13.273 "copy": false, 00:15:13.274 "nvme_iov_md": false 00:15:13.274 }, 00:15:13.274 "driver_specific": { 00:15:13.274 "lvol": { 00:15:13.274 "lvol_store_uuid": "9b79dff1-4b1a-4525-af6d-793291c67281", 00:15:13.274 "base_bdev": "aio_bdev", 00:15:13.274 "thin_provision": false, 00:15:13.274 "num_allocated_clusters": 38, 00:15:13.274 "snapshot": false, 00:15:13.274 "clone": false, 00:15:13.274 "esnap_clone": false 00:15:13.274 } 00:15:13.274 } 00:15:13.274 } 00:15:13.274 ] 00:15:13.274 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:13.274 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b79dff1-4b1a-4525-af6d-793291c67281 00:15:13.274 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:13.534 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:13.534 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b79dff1-4b1a-4525-af6d-793291c67281 00:15:13.534 13:45:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:13.534 13:45:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:13.534 13:45:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c79e7ffd-b6e7-4a4e-a9c0-abb046aa5664 00:15:13.794 13:45:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b79dff1-4b1a-4525-af6d-793291c67281 00:15:14.054 13:45:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:14.054 13:45:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:14.054 00:15:14.054 real 0m16.946s 00:15:14.054 user 0m44.259s 00:15:14.054 sys 0m3.048s 00:15:14.054 13:45:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:14.054 13:45:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:14.316 ************************************ 00:15:14.316 END TEST lvs_grow_dirty 00:15:14.316 ************************************ 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
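Stepping back, the lvs_grow dirty pass that ends here boils down to growing an lvstore that sits on a resizable AIO file and proving the growth survives an unclean target restart. The checks above also encode the cluster arithmetic: with a 4 MiB cluster size, the 200 MiB backing file yields 49 data clusters and the 400 MiB file yields 99 (in this configuration one cluster's worth is consumed by lvstore metadata), and the thick-provisioned 150 MiB lvol pins 38 of them, which leaves the expected 99 - 38 = 61 free clusters. The sketch below reconstructs the setup and grow steps from the commands in the trace; long workspace paths are shortened and AIO_FILE is a placeholder for the backing file location, so it is an illustration rather than the literal contents of nvmf_lvs_grow.sh.

    AIO_FILE=/path/to/aio_bdev_file              # placeholder for the test's backing file
    truncate -s 200M "$AIO_FILE"
    rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
    LVS_UUID=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    LVOL_UUID=$(rpc.py bdev_lvol_create -u "$LVS_UUID" lvol 150)   # 150 MiB, thick-provisioned here

    # Grow the backing file, let the AIO bdev pick up the new size, then grow the lvstore
    # while bdevperf keeps writing to it over NVMe/TCP.
    truncate -s 400M "$AIO_FILE"
    rpc.py bdev_aio_rescan aio_bdev
    rpc.py bdev_lvol_grow_lvstore -u "$LVS_UUID"
    rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after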
00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:14.316 nvmf_trace.0 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:14.316 rmmod nvme_tcp 00:15:14.316 rmmod nvme_fabrics 00:15:14.316 rmmod nvme_keyring 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1033188 ']' 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1033188 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1033188 ']' 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1033188 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1033188 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1033188' 00:15:14.316 killing process with pid 1033188 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1033188 00:15:14.316 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1033188 00:15:14.576 13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:14.576 13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:14.576 13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:14.576 
13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.576 13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:14.576 13:45:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.576 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.576 13:45:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.130 13:45:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:17.130 00:15:17.130 real 0m43.124s 00:15:17.130 user 1m5.273s 00:15:17.130 sys 0m10.084s 00:15:17.130 13:45:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:17.130 13:45:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:17.130 ************************************ 00:15:17.130 END TEST nvmf_lvs_grow 00:15:17.130 ************************************ 00:15:17.130 13:45:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:17.130 13:45:43 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:17.130 13:45:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:17.130 13:45:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.130 13:45:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:17.130 ************************************ 00:15:17.130 START TEST nvmf_bdev_io_wait 00:15:17.130 ************************************ 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:17.130 * Looking for test storage... 
00:15:17.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:17.130 13:45:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:23.759 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:23.759 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:23.759 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.759 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:23.760 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:23.760 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:24.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:24.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:15:24.026 00:15:24.026 --- 10.0.0.2 ping statistics --- 00:15:24.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.026 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:24.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:24.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:15:24.026 00:15:24.026 --- 10.0.0.1 ping statistics --- 00:15:24.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.026 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:24.026 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:24.285 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:24.285 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:24.285 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:24.285 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:24.285 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1038232 00:15:24.285 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1038232 00:15:24.285 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:24.285 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1038232 ']' 00:15:24.285 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.285 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:24.285 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.285 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:24.285 13:45:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:24.285 [2024-07-15 13:45:50.619290] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:24.285 [2024-07-15 13:45:50.619343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.285 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.285 [2024-07-15 13:45:50.685548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:24.285 [2024-07-15 13:45:50.754081] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.285 [2024-07-15 13:45:50.754115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.285 [2024-07-15 13:45:50.754128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.285 [2024-07-15 13:45:50.754138] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.285 [2024-07-15 13:45:50.754144] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:24.285 [2024-07-15 13:45:50.754217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.285 [2024-07-15 13:45:50.754329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:24.285 [2024-07-15 13:45:50.754484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.285 [2024-07-15 13:45:50.754486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.225 [2024-07-15 13:45:51.502648] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
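rpc_cmd in this trace is effectively SPDK's scripts/rpc.py pointed at /var/tmp/spdk.sock, so the target bring-up traced here (transport init above, malloc-backed subsystem and listener just below) is roughly the following sequence; paths are this workspace's, and launch/wait details are simplified:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# start the target inside the namespace, paused until framework_start_init (--wait-for-rpc)
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
# (the harness waits for /var/tmp/spdk.sock via waitforlisten before issuing RPCs)

"$SPDK/scripts/rpc.py" bdev_set_options -p 5 -c 1
"$SPDK/scripts/rpc.py" framework_start_init
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420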
00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.225 Malloc0 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.225 [2024-07-15 13:45:51.571402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1038352 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1038354 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:25.225 { 00:15:25.225 "params": { 00:15:25.225 "name": "Nvme$subsystem", 00:15:25.225 "trtype": "$TEST_TRANSPORT", 00:15:25.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.225 "adrfam": "ipv4", 00:15:25.225 "trsvcid": "$NVMF_PORT", 00:15:25.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.225 "hdgst": ${hdgst:-false}, 00:15:25.225 "ddgst": ${ddgst:-false} 00:15:25.225 }, 00:15:25.225 "method": "bdev_nvme_attach_controller" 00:15:25.225 } 00:15:25.225 EOF 00:15:25.225 )") 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1038357 00:15:25.225 13:45:51 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1038361 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:25.225 { 00:15:25.225 "params": { 00:15:25.225 "name": "Nvme$subsystem", 00:15:25.225 "trtype": "$TEST_TRANSPORT", 00:15:25.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.225 "adrfam": "ipv4", 00:15:25.225 "trsvcid": "$NVMF_PORT", 00:15:25.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.225 "hdgst": ${hdgst:-false}, 00:15:25.225 "ddgst": ${ddgst:-false} 00:15:25.225 }, 00:15:25.225 "method": "bdev_nvme_attach_controller" 00:15:25.225 } 00:15:25.225 EOF 00:15:25.225 )") 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:25.225 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:25.225 { 00:15:25.225 "params": { 00:15:25.225 "name": "Nvme$subsystem", 00:15:25.225 "trtype": "$TEST_TRANSPORT", 00:15:25.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.225 "adrfam": "ipv4", 00:15:25.225 "trsvcid": "$NVMF_PORT", 00:15:25.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.225 "hdgst": ${hdgst:-false}, 00:15:25.225 "ddgst": ${ddgst:-false} 00:15:25.225 }, 00:15:25.225 "method": "bdev_nvme_attach_controller" 00:15:25.225 } 00:15:25.225 EOF 00:15:25.225 )") 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:25.226 { 00:15:25.226 "params": { 00:15:25.226 "name": "Nvme$subsystem", 00:15:25.226 "trtype": "$TEST_TRANSPORT", 00:15:25.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.226 "adrfam": "ipv4", 00:15:25.226 "trsvcid": "$NVMF_PORT", 00:15:25.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.226 "hdgst": ${hdgst:-false}, 00:15:25.226 "ddgst": ${ddgst:-false} 00:15:25.226 }, 00:15:25.226 "method": "bdev_nvme_attach_controller" 00:15:25.226 } 00:15:25.226 EOF 00:15:25.226 )") 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1038352 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:25.226 "params": { 00:15:25.226 "name": "Nvme1", 00:15:25.226 "trtype": "tcp", 00:15:25.226 "traddr": "10.0.0.2", 00:15:25.226 "adrfam": "ipv4", 00:15:25.226 "trsvcid": "4420", 00:15:25.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.226 "hdgst": false, 00:15:25.226 "ddgst": false 00:15:25.226 }, 00:15:25.226 "method": "bdev_nvme_attach_controller" 00:15:25.226 }' 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
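The heredoc/cat/jq lines around here are gen_nvmf_target_json assembling, on the fly, an SPDK JSON config containing the bdev_nvme_attach_controller call printed just below; each bdevperf instance then reads that config through process substitution, which is why the traced command lines show --json /dev/fd/63. A trimmed sketch of two of the four instances launched here, reusing the flags from the trace:

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# write and read workloads on separate cores, queue depth 128, 4 KiB I/O, 1 s each
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
wait "$WRITE_PID" "$READ_PID"

gen_nvmf_target_json comes from the sourced nvmf/common.sh, so this only runs inside the test environment; a standalone run would need an equivalent JSON file in its place.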
00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:25.226 "params": { 00:15:25.226 "name": "Nvme1", 00:15:25.226 "trtype": "tcp", 00:15:25.226 "traddr": "10.0.0.2", 00:15:25.226 "adrfam": "ipv4", 00:15:25.226 "trsvcid": "4420", 00:15:25.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.226 "hdgst": false, 00:15:25.226 "ddgst": false 00:15:25.226 }, 00:15:25.226 "method": "bdev_nvme_attach_controller" 00:15:25.226 }' 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:25.226 "params": { 00:15:25.226 "name": "Nvme1", 00:15:25.226 "trtype": "tcp", 00:15:25.226 "traddr": "10.0.0.2", 00:15:25.226 "adrfam": "ipv4", 00:15:25.226 "trsvcid": "4420", 00:15:25.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.226 "hdgst": false, 00:15:25.226 "ddgst": false 00:15:25.226 }, 00:15:25.226 "method": "bdev_nvme_attach_controller" 00:15:25.226 }' 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:25.226 13:45:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:25.226 "params": { 00:15:25.226 "name": "Nvme1", 00:15:25.226 "trtype": "tcp", 00:15:25.226 "traddr": "10.0.0.2", 00:15:25.226 "adrfam": "ipv4", 00:15:25.226 "trsvcid": "4420", 00:15:25.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.226 "hdgst": false, 00:15:25.226 "ddgst": false 00:15:25.226 }, 00:15:25.226 "method": "bdev_nvme_attach_controller" 00:15:25.226 }' 00:15:25.226 [2024-07-15 13:45:51.624815] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:25.226 [2024-07-15 13:45:51.624870] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:25.226 [2024-07-15 13:45:51.627688] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:25.226 [2024-07-15 13:45:51.627736] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:25.226 [2024-07-15 13:45:51.628144] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:25.226 [2024-07-15 13:45:51.628188] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:25.226 [2024-07-15 13:45:51.628709] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:25.226 [2024-07-15 13:45:51.628754] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:25.226 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.226 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.486 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.486 [2024-07-15 13:45:51.768492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.486 [2024-07-15 13:45:51.810490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.486 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.486 [2024-07-15 13:45:51.820249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:25.486 [2024-07-15 13:45:51.860027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.486 [2024-07-15 13:45:51.861148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:25.486 [2024-07-15 13:45:51.910496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:25.486 [2024-07-15 13:45:51.918346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.486 [2024-07-15 13:45:51.969506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:25.746 Running I/O for 1 seconds... 00:15:25.746 Running I/O for 1 seconds... 00:15:25.746 Running I/O for 1 seconds... 00:15:25.746 Running I/O for 1 seconds... 00:15:26.685 00:15:26.685 Latency(us) 00:15:26.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.685 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:26.685 Nvme1n1 : 1.00 188097.36 734.76 0.00 0.00 677.31 274.77 757.76 00:15:26.685 =================================================================================================================== 00:15:26.685 Total : 188097.36 734.76 0.00 0.00 677.31 274.77 757.76 00:15:26.685 00:15:26.685 Latency(us) 00:15:26.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.685 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:26.685 Nvme1n1 : 1.01 8212.53 32.08 0.00 0.00 15463.50 7318.19 22609.92 00:15:26.685 =================================================================================================================== 00:15:26.685 Total : 8212.53 32.08 0.00 0.00 15463.50 7318.19 22609.92 00:15:26.685 00:15:26.685 Latency(us) 00:15:26.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.685 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:26.685 Nvme1n1 : 1.01 13033.82 50.91 0.00 0.00 9788.60 5461.33 23811.41 00:15:26.685 =================================================================================================================== 00:15:26.685 Total : 13033.82 50.91 0.00 0.00 9788.60 5461.33 23811.41 00:15:26.685 00:15:26.685 Latency(us) 00:15:26.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.685 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:26.685 Nvme1n1 : 1.00 8420.82 32.89 0.00 0.00 15165.08 3932.16 32986.45 00:15:26.685 =================================================================================================================== 00:15:26.685 Total : 8420.82 32.89 0.00 0.00 15165.08 3932.16 32986.45 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- 
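As a quick sanity check on the tables above: with the 4096-byte I/O size used by every job, MiB/s is just IOPS scaled by the block size, e.g.

awk 'BEGIN { printf "%.2f\n", 13033.82 * 4096 / 1048576 }'   # write job -> 50.91 MiB/s, as reported
awk 'BEGIN { printf "%.2f\n",  8212.53 * 4096 / 1048576 }'   # read job  -> 32.08 MiB/s, as reported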
target/bdev_io_wait.sh@38 -- # wait 1038354 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1038357 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1038361 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:26.944 rmmod nvme_tcp 00:15:26.944 rmmod nvme_fabrics 00:15:26.944 rmmod nvme_keyring 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1038232 ']' 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1038232 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1038232 ']' 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1038232 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:26.944 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1038232 00:15:27.204 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:27.204 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:27.204 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1038232' 00:15:27.204 killing process with pid 1038232 00:15:27.204 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1038232 00:15:27.204 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1038232 00:15:27.204 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:27.204 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:27.204 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:27.204 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.204 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:15:27.204 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.204 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.204 13:45:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.750 13:45:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:29.750 00:15:29.750 real 0m12.579s 00:15:29.750 user 0m18.899s 00:15:29.750 sys 0m6.821s 00:15:29.750 13:45:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:29.750 13:45:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:29.750 ************************************ 00:15:29.750 END TEST nvmf_bdev_io_wait 00:15:29.750 ************************************ 00:15:29.750 13:45:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:29.750 13:45:55 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:29.750 13:45:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:29.750 13:45:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:29.750 13:45:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:29.750 ************************************ 00:15:29.750 START TEST nvmf_queue_depth 00:15:29.750 ************************************ 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:29.750 * Looking for test storage... 
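The nvmftestfini path that closed out the previous test is the mirror image of the setup: delete the subsystem over RPC, unload the kernel initiator modules, kill the target, and drop the namespace and addresses. Approximately (a sketch; _remove_spdk_ns is assumed to boil down to deleting the namespace created earlier):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# removing nvme-tcp also pulls out nvme_fabrics/nvme_keyring, as the rmmod lines above show
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

kill "$nvmfpid"                      # pid recorded when nvmf_tgt was started
ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns for this topology
ip -4 addr flush cvl_0_1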
00:15:29.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:29.750 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:29.751 13:45:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:29.751 13:45:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:36.334 
13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:36.334 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:36.334 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.334 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:36.335 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:36.335 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:36.335 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:36.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
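The discovery block above walks the known E810/X722/MLX PCI IDs and resolves each function to its kernel net device through sysfs; the 'Found net devices under ...' lines are the result of that lookup. Stripped of the harness arrays, the core of it is:

# map a NIC PCI function to its netdev name, e.g. 0000:4b:00.0 -> cvl_0_0
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net devices under $pci: ${dev##*/}"
    done
done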
00:15:36.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:15:36.596 00:15:36.596 --- 10.0.0.2 ping statistics --- 00:15:36.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.596 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:36.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:15:36.596 00:15:36.596 --- 10.0.0.1 ping statistics --- 00:15:36.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.596 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1042950 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1042950 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1042950 ']' 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:36.596 13:46:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:36.596 [2024-07-15 13:46:03.051433] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:36.596 [2024-07-15 13:46:03.051486] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.596 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.856 [2024-07-15 13:46:03.134153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.856 [2024-07-15 13:46:03.207915] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.856 [2024-07-15 13:46:03.207964] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.856 [2024-07-15 13:46:03.207973] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.856 [2024-07-15 13:46:03.207979] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.856 [2024-07-15 13:46:03.207985] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.856 [2024-07-15 13:46:03.208009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:37.425 [2024-07-15 13:46:03.870370] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:37.425 Malloc0 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.425 
13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.425 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:37.425 [2024-07-15 13:46:03.947943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.685 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.685 13:46:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1043169 00:15:37.685 13:46:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:37.685 13:46:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:37.685 13:46:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1043169 /var/tmp/bdevperf.sock 00:15:37.685 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1043169 ']' 00:15:37.685 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:37.685 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.685 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:37.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:37.685 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.685 13:46:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:37.685 [2024-07-15 13:46:04.012345] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
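The queue-depth measurement itself uses bdevperf's remote-control mode, traced just above and below: start bdevperf idle with -z on its own RPC socket, attach the NVMe/TCP controller exported by the target over that socket, then trigger the workload with bdevperf.py perform_tests. A standalone sketch with the arguments from this run (socket-readiness handling omitted; the harness uses waitforlisten for that):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock

# idle bdevperf, queue depth 1024, 4 KiB verify workload, 10 s once started
"$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!

# give it a bdev to exercise: attach the controller at 10.0.0.2:4420
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# run the I/O, collect the table printed below, then stop bdevperf
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
kill "$bdevperf_pid"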
00:15:37.685 [2024-07-15 13:46:04.012414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043169 ] 00:15:37.685 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.685 [2024-07-15 13:46:04.076571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.685 [2024-07-15 13:46:04.151922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.626 13:46:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.626 13:46:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:38.626 13:46:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:38.626 13:46:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.626 13:46:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:38.626 NVMe0n1 00:15:38.626 13:46:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.626 13:46:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:38.626 Running I/O for 10 seconds... 00:15:48.623 00:15:48.623 Latency(us) 00:15:48.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.623 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:48.623 Verification LBA range: start 0x0 length 0x4000 00:15:48.623 NVMe0n1 : 10.07 11354.81 44.35 0.00 0.00 89815.18 24357.55 69905.07 00:15:48.623 =================================================================================================================== 00:15:48.623 Total : 11354.81 44.35 0.00 0.00 89815.18 24357.55 69905.07 00:15:48.623 0 00:15:48.623 13:46:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1043169 00:15:48.623 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1043169 ']' 00:15:48.623 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1043169 00:15:48.623 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:48.623 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:48.623 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1043169 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1043169' 00:15:48.927 killing process with pid 1043169 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1043169 00:15:48.927 Received shutdown signal, test time was about 10.000000 seconds 00:15:48.927 00:15:48.927 Latency(us) 00:15:48.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.927 
=================================================================================================================== 00:15:48.927 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1043169 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:48.927 rmmod nvme_tcp 00:15:48.927 rmmod nvme_fabrics 00:15:48.927 rmmod nvme_keyring 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1042950 ']' 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1042950 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1042950 ']' 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1042950 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1042950 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1042950' 00:15:48.927 killing process with pid 1042950 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1042950 00:15:48.927 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1042950 00:15:49.209 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:49.209 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:49.209 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:49.209 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:49.209 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:49.209 13:46:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.209 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.209 13:46:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.121 13:46:17 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:51.121 00:15:51.121 real 0m21.836s 00:15:51.121 user 0m25.627s 00:15:51.121 sys 0m6.321s 00:15:51.121 13:46:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:51.121 13:46:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:51.121 ************************************ 00:15:51.121 END TEST nvmf_queue_depth 00:15:51.121 ************************************ 00:15:51.382 13:46:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:51.382 13:46:17 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:51.382 13:46:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:51.382 13:46:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:51.382 13:46:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:51.382 ************************************ 00:15:51.382 START TEST nvmf_target_multipath 00:15:51.382 ************************************ 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:51.382 * Looking for test storage... 00:15:51.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.382 13:46:17 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:51.383 13:46:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:59.526 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:59.526 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.526 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:59.526 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:59.527 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:59.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:59.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:15:59.527 00:15:59.527 --- 10.0.0.2 ping statistics --- 00:15:59.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.527 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:59.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:59.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms 00:15:59.527 00:15:59.527 --- 10.0.0.1 ping statistics --- 00:15:59.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.527 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:59.527 only one NIC for nvmf test 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:59.527 rmmod nvme_tcp 00:15:59.527 rmmod nvme_fabrics 00:15:59.527 rmmod nvme_keyring 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.527 13:46:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.912 13:46:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.913 13:46:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:00.913 00:16:00.913 real 0m9.408s 00:16:00.913 user 0m2.063s 00:16:00.913 sys 0m5.254s 00:16:00.913 13:46:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:00.913 13:46:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:00.913 ************************************ 00:16:00.913 END TEST nvmf_target_multipath 00:16:00.913 ************************************ 00:16:00.913 13:46:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:00.913 13:46:27 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:00.913 13:46:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:00.913 13:46:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:00.913 13:46:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:00.913 ************************************ 00:16:00.913 START TEST nvmf_zcopy 00:16:00.913 ************************************ 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:00.913 * Looking for test storage... 
00:16:00.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:00.913 13:46:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:09.056 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:09.057 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.057 
13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:09.057 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:09.057 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:09.057 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:09.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:09.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:16:09.057 00:16:09.057 --- 10.0.0.2 ping statistics --- 00:16:09.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.057 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:09.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:09.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:16:09.057 00:16:09.057 --- 10.0.0.1 ping statistics --- 00:16:09.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.057 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1053621 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1053621 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1053621 ']' 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.057 13:46:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.057 [2024-07-15 13:46:34.504215] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:09.057 [2024-07-15 13:46:34.504303] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.057 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.057 [2024-07-15 13:46:34.594443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.057 [2024-07-15 13:46:34.688073] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.057 [2024-07-15 13:46:34.688135] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:09.057 [2024-07-15 13:46:34.688144] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.057 [2024-07-15 13:46:34.688151] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.057 [2024-07-15 13:46:34.688157] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.058 [2024-07-15 13:46:34.688181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.058 [2024-07-15 13:46:35.327489] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.058 [2024-07-15 13:46:35.351683] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.058 malloc0 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.058 
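The rpc_cmd calls traced in this stretch (together with the nvmf_subsystem_add_ns that follows just below) build the target side of the zcopy test: a TCP transport created with --zcopy so zero-copy receive is exercised, subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, at most 10 namespaces), TCP listeners for the subsystem and for discovery on 10.0.0.2:4420, and a 32 MiB malloc bdev with 4 KiB blocks exposed as namespace 1. A minimal standalone sketch of the same sequence using scripts/rpc.py, assuming the nvmf_tgt started above is reachable on its default RPC socket (the harness issues the identical calls through its rpc_cmd wrapper):
# Target-side setup, mirroring the rpc_cmd calls in the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                   # TCP transport with zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                          # 32 MiB RAM-backed bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # attached as NSID 1 (traced below)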
13:46:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:09.058 { 00:16:09.058 "params": { 00:16:09.058 "name": "Nvme$subsystem", 00:16:09.058 "trtype": "$TEST_TRANSPORT", 00:16:09.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:09.058 "adrfam": "ipv4", 00:16:09.058 "trsvcid": "$NVMF_PORT", 00:16:09.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:09.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:09.058 "hdgst": ${hdgst:-false}, 00:16:09.058 "ddgst": ${ddgst:-false} 00:16:09.058 }, 00:16:09.058 "method": "bdev_nvme_attach_controller" 00:16:09.058 } 00:16:09.058 EOF 00:16:09.058 )") 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:09.058 13:46:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:09.058 "params": { 00:16:09.058 "name": "Nvme1", 00:16:09.058 "trtype": "tcp", 00:16:09.058 "traddr": "10.0.0.2", 00:16:09.058 "adrfam": "ipv4", 00:16:09.058 "trsvcid": "4420", 00:16:09.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:09.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:09.058 "hdgst": false, 00:16:09.058 "ddgst": false 00:16:09.058 }, 00:16:09.058 "method": "bdev_nvme_attach_controller" 00:16:09.058 }' 00:16:09.058 [2024-07-15 13:46:35.450107] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:09.058 [2024-07-15 13:46:35.450180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1053915 ] 00:16:09.058 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.058 [2024-07-15 13:46:35.514155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.318 [2024-07-15 13:46:35.587988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.579 Running I/O for 10 seconds... 
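On the initiator side, gen_nvmf_target_json (traced above) assembles a small bdevperf configuration whose only entry is the bdev_nvme_attach_controller call printed by the trace; bdevperf reads it through process substitution (--json /dev/fd/62) and then runs the 10 second verify workload at queue depth 128 with 8 KiB I/O against the resulting Nvme1n1 bdev. A roughly equivalent sketch, written to a regular file instead of a /dev/fd path; the outer "subsystems"/"bdev" wrapper paraphrases what the helper emits, while the params block is copied from the trace:
# Initiator-side sketch: attach to the NVMe/TCP target and drive I/O with bdevperf.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192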
00:16:19.579 00:16:19.579 Latency(us) 00:16:19.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.579 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:19.579 Verification LBA range: start 0x0 length 0x1000 00:16:19.579 Nvme1n1 : 10.01 8850.97 69.15 0.00 0.00 14409.62 2252.80 34297.17 00:16:19.579 =================================================================================================================== 00:16:19.579 Total : 8850.97 69.15 0.00 0.00 14409.62 2252.80 34297.17 00:16:19.579 13:46:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1055971 00:16:19.579 13:46:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:19.579 13:46:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.579 13:46:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:19.579 13:46:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:19.579 13:46:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:19.579 13:46:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:19.579 13:46:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:19.579 13:46:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:19.579 { 00:16:19.579 "params": { 00:16:19.579 "name": "Nvme$subsystem", 00:16:19.579 "trtype": "$TEST_TRANSPORT", 00:16:19.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:19.579 "adrfam": "ipv4", 00:16:19.579 "trsvcid": "$NVMF_PORT", 00:16:19.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:19.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:19.579 "hdgst": ${hdgst:-false}, 00:16:19.579 "ddgst": ${ddgst:-false} 00:16:19.579 }, 00:16:19.579 "method": "bdev_nvme_attach_controller" 00:16:19.579 } 00:16:19.579 EOF 00:16:19.579 )") 00:16:19.579 13:46:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:19.579 [2024-07-15 13:46:46.023873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.579 [2024-07-15 13:46:46.023901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.579 13:46:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
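The second bdevperf run started here (-t 5 -q 128 -w randrw -M 50 -o 8192: five seconds of mixed random read/write, 50% reads, 8 KiB I/O at queue depth 128) runs while the test script, with tracing turned off at target/zcopy.sh@41, appears to keep re-issuing nvmf_subsystem_add_ns for an NSID that already exists; each attempt pauses subsystem nqn.2016-06.io.spdk:cnode1 and then fails, which is why the repeated 'Requested NSID 1 already in use' / 'Unable to add namespace' *ERROR* pairs interleave with the trace and are expected rather than a failure. A hypothetical sketch of such a stress loop (not the literal contents of zcopy.sh):
# Hypothetical: hammer the add-namespace RPC while bdevperf I/O is in flight, forcing
# repeated subsystem pause/resume cycles; every call is expected to fail with "NSID in use".
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for _ in $(seq 1 50); do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done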
00:16:19.579 13:46:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:19.579 13:46:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:19.579 "params": { 00:16:19.579 "name": "Nvme1", 00:16:19.579 "trtype": "tcp", 00:16:19.579 "traddr": "10.0.0.2", 00:16:19.579 "adrfam": "ipv4", 00:16:19.579 "trsvcid": "4420", 00:16:19.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:19.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:19.579 "hdgst": false, 00:16:19.579 "ddgst": false 00:16:19.579 }, 00:16:19.579 "method": "bdev_nvme_attach_controller" 00:16:19.579 }' 00:16:19.579 [2024-07-15 13:46:46.035868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.579 [2024-07-15 13:46:46.035877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.579 [2024-07-15 13:46:46.047898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.580 [2024-07-15 13:46:46.047905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.580 [2024-07-15 13:46:46.059927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.580 [2024-07-15 13:46:46.059934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.580 [2024-07-15 13:46:46.064820] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:19.580 [2024-07-15 13:46:46.064867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055971 ] 00:16:19.580 [2024-07-15 13:46:46.071959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.580 [2024-07-15 13:46:46.071966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.580 [2024-07-15 13:46:46.083990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.580 [2024-07-15 13:46:46.083998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.580 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.580 [2024-07-15 13:46:46.096021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.580 [2024-07-15 13:46:46.096029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.108053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.108061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.120082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.120089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.122503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.841 [2024-07-15 13:46:46.132114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.132124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.144147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.144155] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.156178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.156188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.168207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.168216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.180237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.180245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.186868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.841 [2024-07-15 13:46:46.192267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.192275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.204303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.204315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.216333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.216345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.228363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.228371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.240396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.240404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.252425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.252434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.264462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.264475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.276489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.276500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.288522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.288536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.300549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.300556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.312579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.312587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.324612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.324620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.336645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.841 [2024-07-15 13:46:46.336654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.841 [2024-07-15 13:46:46.348676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.842 [2024-07-15 13:46:46.348685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.842 [2024-07-15 13:46:46.360706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.842 [2024-07-15 13:46:46.360714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.125 [2024-07-15 13:46:46.372737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.125 [2024-07-15 13:46:46.372745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.125 [2024-07-15 13:46:46.384768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.125 [2024-07-15 13:46:46.384776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.125 [2024-07-15 13:46:46.396800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.125 [2024-07-15 13:46:46.396809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.125 [2024-07-15 13:46:46.408830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.125 [2024-07-15 13:46:46.408838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.125 [2024-07-15 13:46:46.420860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.125 [2024-07-15 13:46:46.420868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.125 [2024-07-15 13:46:46.432891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.125 [2024-07-15 13:46:46.432901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.125 [2024-07-15 13:46:46.444922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.125 [2024-07-15 13:46:46.444930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.125 [2024-07-15 13:46:46.456953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.125 [2024-07-15 13:46:46.456961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.125 [2024-07-15 13:46:46.468984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.125 [2024-07-15 13:46:46.468992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.125 [2024-07-15 13:46:46.481016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.125 [2024-07-15 13:46:46.481025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.125 [2024-07-15 13:46:46.493053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:16:20.125 [2024-07-15 13:46:46.493067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.125 Running I/O for 5 seconds... 00:16:20.125 [2024-07-15 13:46:46.510905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.125 [2024-07-15 13:46:46.510921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.126 [2024-07-15 13:46:46.524453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.126 [2024-07-15 13:46:46.524475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.126 [2024-07-15 13:46:46.537958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.126 [2024-07-15 13:46:46.537974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.126 [2024-07-15 13:46:46.550775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.126 [2024-07-15 13:46:46.550790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.126 [2024-07-15 13:46:46.563980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.126 [2024-07-15 13:46:46.563995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.126 [2024-07-15 13:46:46.576452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.126 [2024-07-15 13:46:46.576467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.126 [2024-07-15 13:46:46.589937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.126 [2024-07-15 13:46:46.589953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.126 [2024-07-15 13:46:46.602894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.126 [2024-07-15 13:46:46.602909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.126 [2024-07-15 13:46:46.616346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.126 [2024-07-15 13:46:46.616362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.126 [2024-07-15 13:46:46.628467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.126 [2024-07-15 13:46:46.628482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.126 [2024-07-15 13:46:46.641609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.126 [2024-07-15 13:46:46.641625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.654435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.654450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.667764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.667780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.680680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.680695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:16:20.389 [2024-07-15 13:46:46.694072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.694087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.707193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.707207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.719599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.719614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.732718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.732732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.745499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.745513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.758833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.758847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.771635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.771652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.784886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.784901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.797859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.797874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.810678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.810693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.823849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.823864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.837086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.837103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.849823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.849838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.863113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.863133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.876010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:20.389 [2024-07-15 13:46:46.876025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.889338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.889353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.389 [2024-07-15 13:46:46.902551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.389 [2024-07-15 13:46:46.902567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.650 [2024-07-15 13:46:46.915478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.650 [2024-07-15 13:46:46.915494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.650 [2024-07-15 13:46:46.928493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.650 [2024-07-15 13:46:46.928509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.650 [2024-07-15 13:46:46.941803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.650 [2024-07-15 13:46:46.941818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.650 [2024-07-15 13:46:46.954281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.650 [2024-07-15 13:46:46.954296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.650 [2024-07-15 13:46:46.967478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.650 [2024-07-15 13:46:46.967492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.650 [2024-07-15 13:46:46.980887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.650 [2024-07-15 13:46:46.980902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.650 [2024-07-15 13:46:46.993887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.650 [2024-07-15 13:46:46.993902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.650 [2024-07-15 13:46:47.007080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.651 [2024-07-15 13:46:47.007095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.651 [2024-07-15 13:46:47.020117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.651 [2024-07-15 13:46:47.020137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.651 [2024-07-15 13:46:47.032824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.651 [2024-07-15 13:46:47.032840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.651 [2024-07-15 13:46:47.045645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.651 [2024-07-15 13:46:47.045660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.651 [2024-07-15 13:46:47.058463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.651 [2024-07-15 13:46:47.058478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.651 [2024-07-15 13:46:47.071415] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.651 [2024-07-15 13:46:47.071430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.651 [2024-07-15 13:46:47.084358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.651 [2024-07-15 13:46:47.084374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.651 [2024-07-15 13:46:47.097109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.651 [2024-07-15 13:46:47.097128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.651 [2024-07-15 13:46:47.110299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.651 [2024-07-15 13:46:47.110314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.651 [2024-07-15 13:46:47.123461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.651 [2024-07-15 13:46:47.123476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.651 [2024-07-15 13:46:47.136156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.651 [2024-07-15 13:46:47.136172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.651 [2024-07-15 13:46:47.149585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.651 [2024-07-15 13:46:47.149600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.651 [2024-07-15 13:46:47.162563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.651 [2024-07-15 13:46:47.162578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.175872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.175888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.188181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.188197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.200787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.200802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.214338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.214354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.227229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.227244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.240267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.240282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.253391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.253407] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.266970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.266984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.279126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.279141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.292115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.292137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.305464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.305479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.318647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.318662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.331597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.331612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.344594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.344610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.357964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.357979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.370730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.370745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.384036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.384051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.397099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.397114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.912 [2024-07-15 13:46:47.410288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.912 [2024-07-15 13:46:47.410303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.913 [2024-07-15 13:46:47.422921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.913 [2024-07-15 13:46:47.422936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.913 [2024-07-15 13:46:47.435847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:20.913 [2024-07-15 13:46:47.435862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.449127] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.449142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.462069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.462084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.475064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.475079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.488044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.488058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.500227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.500241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.513273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.513288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.526586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.526601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.539770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.539784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.553000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.553015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.566299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.566314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.579651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.579666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.592915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.592930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.605937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.605952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.618858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.618874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.632162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.632177] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.644413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.644427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.657583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.657598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.670818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.670832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.684330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.684346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.174 [2024-07-15 13:46:47.696595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.174 [2024-07-15 13:46:47.696610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.709597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.709612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.723020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.723035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.736133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.736149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.749637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.749656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.762339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.762353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.775507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.775522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.788254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.788270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.800869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.800884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.814136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.814152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.827438] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.827452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.840334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.840349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.853441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.853456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.866038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.866053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.879383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.879398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.892385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.892400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.905401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.905415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.918885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.918899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.931559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.931574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.944736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.944751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.452 [2024-07-15 13:46:47.957811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.452 [2024-07-15 13:46:47.957826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.734 [2024-07-15 13:46:47.970859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.734 [2024-07-15 13:46:47.970876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.734 [2024-07-15 13:46:47.983827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.734 [2024-07-15 13:46:47.983842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.734 [2024-07-15 13:46:47.996806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.734 [2024-07-15 13:46:47.996825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.734 [2024-07-15 13:46:48.010162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.734 [2024-07-15 13:46:48.010177] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.734 [2024-07-15 13:46:48.023467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.734 [2024-07-15 13:46:48.023481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.734 [2024-07-15 13:46:48.036082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.734 [2024-07-15 13:46:48.036097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.734 [2024-07-15 13:46:48.049391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.734 [2024-07-15 13:46:48.049406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.734 [2024-07-15 13:46:48.062063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.734 [2024-07-15 13:46:48.062078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.734 [2024-07-15 13:46:48.075336] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.734 [2024-07-15 13:46:48.075351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.734 [2024-07-15 13:46:48.088577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.734 [2024-07-15 13:46:48.088591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.734 [2024-07-15 13:46:48.101827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.734 [2024-07-15 13:46:48.101841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.734 [2024-07-15 13:46:48.114199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.734 [2024-07-15 13:46:48.114214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.734 [2024-07-15 13:46:48.126655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.734 [2024-07-15 13:46:48.126670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.734 [2024-07-15 13:46:48.139956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.734 [2024-07-15 13:46:48.139970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.734 [2024-07-15 13:46:48.152881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.734 [2024-07-15 13:46:48.152896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.734 [2024-07-15 13:46:48.166018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.734 [2024-07-15 13:46:48.166033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.735 [2024-07-15 13:46:48.179280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.735 [2024-07-15 13:46:48.179294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.735 [2024-07-15 13:46:48.192233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.735 [2024-07-15 13:46:48.192248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.735 [2024-07-15 13:46:48.205579] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.735 [2024-07-15 13:46:48.205594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.735 [2024-07-15 13:46:48.218786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.735 [2024-07-15 13:46:48.218800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.735 [2024-07-15 13:46:48.232110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.735 [2024-07-15 13:46:48.232128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.735 [2024-07-15 13:46:48.244497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.735 [2024-07-15 13:46:48.244515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.735 [2024-07-15 13:46:48.257007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.735 [2024-07-15 13:46:48.257022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.270463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.270479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.282834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.282849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.295543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.295558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.308885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.308900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.321255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.321270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.333879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.333894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.346666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.346680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.359127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.359141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.372116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.372136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.384171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.384186] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.397185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.397200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.410410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.410424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.422744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.422759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.435800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.435815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.448874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.448889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.461863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.461877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.475141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.475156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.488273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.488292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.500521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.500535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.995 [2024-07-15 13:46:48.513597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.995 [2024-07-15 13:46:48.513612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.526783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.526799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.539800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.539815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.552928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.552943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.566216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.566231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.579560] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.579575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.592560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.592575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.605601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.605615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.618958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.618973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.631510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.631526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.644645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.644660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.657444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.657459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.670492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.670507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.683952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.683967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.696368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.696383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.708821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.708837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.721795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.721810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.734747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.734762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.747752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.747767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.257 [2024-07-15 13:46:48.760651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.257 [2024-07-15 13:46:48.760666] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.258 [2024-07-15 13:46:48.773866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.258 [2024-07-15 13:46:48.773882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.786699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.786714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.800023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.800038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.813331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.813347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.826430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.826445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.839400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.839415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.852112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.852133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.865117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.865138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.878020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.878035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.891407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.891422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.903865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.903880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.916529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.916545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.929220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.929236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.942190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.942205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.954938] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.954954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.968147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.968163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.981635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.981650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:48.994624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:48.994639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:49.007771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:49.007786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:49.020310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:49.020326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.519 [2024-07-15 13:46:49.033505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.519 [2024-07-15 13:46:49.033520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.779 [2024-07-15 13:46:49.046443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.779 [2024-07-15 13:46:49.046458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.779 [2024-07-15 13:46:49.059558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.779 [2024-07-15 13:46:49.059573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.779 [2024-07-15 13:46:49.071797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.779 [2024-07-15 13:46:49.071812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.779 [2024-07-15 13:46:49.084919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.779 [2024-07-15 13:46:49.084935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.779 [2024-07-15 13:46:49.098029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.779 [2024-07-15 13:46:49.098044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.779 [2024-07-15 13:46:49.111252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.779 [2024-07-15 13:46:49.111267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.780 [2024-07-15 13:46:49.124083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.780 [2024-07-15 13:46:49.124098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.780 [2024-07-15 13:46:49.137386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.780 [2024-07-15 13:46:49.137401] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.780 [2024-07-15 13:46:49.150326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.780 [2024-07-15 13:46:49.150341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.780 [2024-07-15 13:46:49.163363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.780 [2024-07-15 13:46:49.163378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.780 [2024-07-15 13:46:49.176508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.780 [2024-07-15 13:46:49.176523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.780 [2024-07-15 13:46:49.189786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.780 [2024-07-15 13:46:49.189801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.780 [2024-07-15 13:46:49.203158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.780 [2024-07-15 13:46:49.203173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.780 [2024-07-15 13:46:49.215678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.780 [2024-07-15 13:46:49.215693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.780 [2024-07-15 13:46:49.228755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.780 [2024-07-15 13:46:49.228769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.780 [2024-07-15 13:46:49.241590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.780 [2024-07-15 13:46:49.241604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.780 [2024-07-15 13:46:49.254948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.780 [2024-07-15 13:46:49.254962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.780 [2024-07-15 13:46:49.267874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.780 [2024-07-15 13:46:49.267889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.780 [2024-07-15 13:46:49.281010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.780 [2024-07-15 13:46:49.281025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.780 [2024-07-15 13:46:49.294386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.780 [2024-07-15 13:46:49.294400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.307574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.307589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.320225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.320240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.333229] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.333243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.346172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.346186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.359486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.359501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.372634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.372649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.385796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.385811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.398724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.398739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.412032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.412047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.425166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.425183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.438051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.438065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.451294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.451309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.464012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.464027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.477203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.477218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.489784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.489799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.502686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.502701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.516004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.516019] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.528202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.528216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.541476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.541490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.040 [2024-07-15 13:46:49.554157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.040 [2024-07-15 13:46:49.554171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.566614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.566628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.579854] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.579869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.592352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.592367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.605135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.605149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.617415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.617430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.629738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.629752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.643199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.643214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.656340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.656356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.669413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.669428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.682736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.682751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.695117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.695137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.708397] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.708415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.721442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.721457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.734248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.734263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.747440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.747454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.760423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.760437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.773408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.773422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.786296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.786310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.799791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.799806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.812500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.812514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.301 [2024-07-15 13:46:49.825683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.301 [2024-07-15 13:46:49.825697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:49.839044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:49.839059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:49.851630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:49.851644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:49.864509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:49.864524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:49.877491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:49.877506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:49.890833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:49.890848] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:49.903217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:49.903232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:49.916382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:49.916397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:49.929849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:49.929863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:49.942488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:49.942503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:49.955394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:49.955413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:49.968656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:49.968671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:49.981897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:49.981913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:49.994534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:49.994550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:50.008026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:50.008042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:50.020920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:50.020936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:50.036693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:50.036710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:50.050745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:50.050760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:50.063833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:50.063847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.561 [2024-07-15 13:46:50.077396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.561 [2024-07-15 13:46:50.077411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.090359] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.090374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.102358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.102373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.115753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.115767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.128965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.128980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.141831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.141847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.155094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.155109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.168157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.168172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.181429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.181444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.194475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.194490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.207800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.207823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.220062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.220077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.232946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.232962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.246224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.246239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.259236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.259251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.272690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.272706] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.285034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.285050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.297351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.297366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.310071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.310086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.323189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.323204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.822 [2024-07-15 13:46:50.335473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.822 [2024-07-15 13:46:50.335489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.348357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.348372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.361497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.361513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.373861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.373876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.387096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.387111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.400478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.400494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.413407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.413422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.426369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.426383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.439362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.439377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.452645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.452664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.465951] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.465968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.479303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.479319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.492365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.492380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.505455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.505470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.518368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.518383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.531387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.531403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.544677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.544693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.557615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.557630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.570969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.570984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.583828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.583843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.083 [2024-07-15 13:46:50.596831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.083 [2024-07-15 13:46:50.596846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.609925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.609940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.623246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.623262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.635768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.635782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.648919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.648934] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.662341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.662356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.675493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.675509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.688615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.688631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.701946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.701962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.714738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.714753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.728003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.728019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.740228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.740243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.753619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.753634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.766999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.767014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.780325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.780340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.793435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.793450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.805838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.805853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.344 [2024-07-15 13:46:50.819338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.344 [2024-07-15 13:46:50.819355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.345 [2024-07-15 13:46:50.832318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.345 [2024-07-15 13:46:50.832333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.345 [2024-07-15 13:46:50.845401] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.345 [2024-07-15 13:46:50.845416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.345 [2024-07-15 13:46:50.858633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.345 [2024-07-15 13:46:50.858649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.605 [2024-07-15 13:46:50.871649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.605 [2024-07-15 13:46:50.871664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.605 [2024-07-15 13:46:50.884813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.605 [2024-07-15 13:46:50.884829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.605 [2024-07-15 13:46:50.897750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.605 [2024-07-15 13:46:50.897765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.605 [2024-07-15 13:46:50.910794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.605 [2024-07-15 13:46:50.910809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.605 [2024-07-15 13:46:50.923698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.605 [2024-07-15 13:46:50.923713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.605 [2024-07-15 13:46:50.936783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.605 [2024-07-15 13:46:50.936798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.605 [2024-07-15 13:46:50.950148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.605 [2024-07-15 13:46:50.950163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.605 [2024-07-15 13:46:50.962918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.605 [2024-07-15 13:46:50.962933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.605 [2024-07-15 13:46:50.976289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.605 [2024-07-15 13:46:50.976304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.605 [2024-07-15 13:46:50.988741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.605 [2024-07-15 13:46:50.988757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.605 [2024-07-15 13:46:51.002060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.606 [2024-07-15 13:46:51.002075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.606 [2024-07-15 13:46:51.015177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.606 [2024-07-15 13:46:51.015191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.606 [2024-07-15 13:46:51.027963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.606 [2024-07-15 13:46:51.027978] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.606 [2024-07-15 13:46:51.040855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.606 [2024-07-15 13:46:51.040870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.606 [2024-07-15 13:46:51.054177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.606 [2024-07-15 13:46:51.054191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.606 [2024-07-15 13:46:51.067557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.606 [2024-07-15 13:46:51.067572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.606 [2024-07-15 13:46:51.079751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.606 [2024-07-15 13:46:51.079765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.606 [2024-07-15 13:46:51.091873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.606 [2024-07-15 13:46:51.091889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.606 [2024-07-15 13:46:51.104933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.606 [2024-07-15 13:46:51.104948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.606 [2024-07-15 13:46:51.118210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.606 [2024-07-15 13:46:51.118225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.131050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.131065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.144047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.144062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.157129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.157144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.170018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.170033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.182230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.182245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.195436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.195451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.208596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.208610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.220999] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.221013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.234203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.234218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.247231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.247245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.260553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.260567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.273362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.273377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.286527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.286542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.299887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.299902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.312385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.312400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.325330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.325344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.338662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.338677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.351805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.351820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.364812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.364827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.378132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.378147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.867 [2024-07-15 13:46:51.390916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:24.867 [2024-07-15 13:46:51.390931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.127 [2024-07-15 13:46:51.403804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.403819] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.417110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.417129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.430118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.430137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.443520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.443534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.456445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.456459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.469559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.469574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.482676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.482690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.495747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.495762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.508663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.508677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:25.128
00:16:25.128                                                                                       Latency(us)
00:16:25.128  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:25.128  Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:25.128  Nvme1n1                     :       5.01   19595.51     153.09       0.00     0.00    6525.48    2703.36   18459.31
00:16:25.128 ===================================================================================================================
00:16:25.128  Total                       :              19595.51     153.09       0.00     0.00    6525.48    2703.36   18459.31
00:16:25.128 [2024-07-15 13:46:51.518519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.518533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.530545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.530557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.542585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.542595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.554612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.554624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.566637]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.566646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.578665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.578675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.590696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.590704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.602727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.602736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.614759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.614768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.626789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.626804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.638818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.638826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.128 [2024-07-15 13:46:51.650848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.128 [2024-07-15 13:46:51.650855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1055971) - No such process 00:16:25.388 13:46:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1055971 00:16:25.388 13:46:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:25.388 13:46:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.388 13:46:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:25.388 13:46:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.388 13:46:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:25.388 13:46:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.388 13:46:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:25.388 delay0 00:16:25.388 13:46:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.388 13:46:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:25.388 13:46:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.388 13:46:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:25.389 13:46:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.389 13:46:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w 
randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:25.389 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.389 [2024-07-15 13:46:51.802759] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:31.969 Initializing NVMe Controllers 00:16:31.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:31.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:31.969 Initialization complete. Launching workers. 00:16:31.969 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 114 00:16:31.969 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 401, failed to submit 33 00:16:31.969 success 202, unsuccess 199, failed 0 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:31.969 rmmod nvme_tcp 00:16:31.969 rmmod nvme_fabrics 00:16:31.969 rmmod nvme_keyring 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1053621 ']' 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1053621 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1053621 ']' 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1053621 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:31.969 13:46:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1053621 00:16:31.969 13:46:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:31.969 13:46:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:31.969 13:46:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1053621' 00:16:31.969 killing process with pid 1053621 00:16:31.969 13:46:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1053621 00:16:31.969 13:46:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1053621 00:16:31.969 13:46:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:31.969 13:46:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:31.969 13:46:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:31.969 13:46:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:31.969 13:46:58 
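For readability, the zcopy abort step traced above reduces to roughly the following shell sequence. This is a hedged sketch reconstructed only from the commands visible in this log; rpc_cmd is the test harness wrapper around SPDK's rpc.py, and the flag meanings noted in the comments follow the usual conventions of the SPDK abort/perf examples rather than anything stated in the trace itself.

  # Swap the namespace onto a delay bdev so the abort tool has long-lived I/O to cancel
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # Run the abort example against the TCP listener: core mask 0x1, 5 s runtime,
  # queue depth 64, 50/50 random read/write, warning-level logging
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 \
      -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'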
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:31.969 13:46:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.969 13:46:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.969 13:46:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.881 13:47:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:33.881 00:16:33.881 real 0m33.015s 00:16:33.881 user 0m44.981s 00:16:33.881 sys 0m9.825s 00:16:33.881 13:47:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:33.881 13:47:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:33.881 ************************************ 00:16:33.881 END TEST nvmf_zcopy 00:16:33.881 ************************************ 00:16:33.881 13:47:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:33.881 13:47:00 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:33.881 13:47:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:33.881 13:47:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.881 13:47:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:33.881 ************************************ 00:16:33.881 START TEST nvmf_nmic 00:16:33.881 ************************************ 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:33.881 * Looking for test storage... 00:16:33.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic 
-- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.881 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.882 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.882 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.882 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.882 13:47:00 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.882 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.882 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.882 13:47:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:33.882 13:47:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:33.882 13:47:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:33.882 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:33.882 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.882 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:33.882 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:33.882 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:33.882 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.882 13:47:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.882 13:47:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.143 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:34.143 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:34.143 13:47:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:34.143 13:47:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:40.730 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.730 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:40.730 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:40.730 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:40.730 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.731 13:47:07 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:40.731 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:40.731 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:16:40.731 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:40.731 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.731 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.992 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.992 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.992 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:40.992 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.992 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.992 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.992 13:47:07 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:40.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:16:40.992 00:16:40.992 --- 10.0.0.2 ping statistics --- 00:16:40.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.992 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:16:40.992 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:16:41.251 00:16:41.251 --- 10.0.0.1 ping statistics --- 00:16:41.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.251 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:16:41.251 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1062320 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1062320 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1062320 ']' 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.252 13:47:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:41.252 [2024-07-15 13:47:07.617428] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
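The nvmftestinit trace above splits the two detected e810 ports between a fresh network namespace and the host: cvl_0_0 moves into cvl_0_0_ns_spdk and gets 10.0.0.2/24 for the target side, cvl_0_1 stays in the default namespace as the 10.0.0.1/24 initiator, TCP port 4420 is opened in the firewall, and both directions are ping-verified before nvmf_tgt is started. A minimal standalone sketch of that sequence (run as root; the cvl_0_* interface names are the ones this log reports and will differ on other machines):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                             # host -> namespaced target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> host port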
00:16:41.252 [2024-07-15 13:47:07.617495] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.252 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.252 [2024-07-15 13:47:07.691413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.252 [2024-07-15 13:47:07.769888] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.252 [2024-07-15 13:47:07.769930] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.252 [2024-07-15 13:47:07.769938] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.252 [2024-07-15 13:47:07.769945] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.252 [2024-07-15 13:47:07.769950] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:41.252 [2024-07-15 13:47:07.770091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.252 [2024-07-15 13:47:07.770226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.252 [2024-07-15 13:47:07.770503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.252 [2024-07-15 13:47:07.770505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:42.192 [2024-07-15 13:47:08.445807] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:42.192 Malloc0 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:42.192 [2024-07-15 13:47:08.502456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:42.192 test case1: single bdev can't be used in multiple subsystems 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:42.192 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:42.193 [2024-07-15 13:47:08.538387] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:42.193 [2024-07-15 13:47:08.538406] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:42.193 [2024-07-15 13:47:08.538413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.193 request: 00:16:42.193 { 00:16:42.193 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:42.193 "namespace": { 00:16:42.193 "bdev_name": "Malloc0", 00:16:42.193 "no_auto_visible": false 00:16:42.193 }, 00:16:42.193 "method": "nvmf_subsystem_add_ns", 00:16:42.193 "req_id": 1 00:16:42.193 } 00:16:42.193 Got JSON-RPC error response 00:16:42.193 response: 00:16:42.193 { 00:16:42.193 "code": -32602, 00:16:42.193 "message": "Invalid parameters" 00:16:42.193 } 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # 
echo ' Adding namespace failed - expected result.' 00:16:42.193 Adding namespace failed - expected result. 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:42.193 test case2: host connect to nvmf target in multiple paths 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:42.193 [2024-07-15 13:47:08.550509] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.193 13:47:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.591 13:47:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:45.096 13:47:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:45.096 13:47:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:45.096 13:47:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:45.096 13:47:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:45.096 13:47:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:47.638 13:47:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:47.638 13:47:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:47.638 13:47:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:47.638 13:47:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:47.638 13:47:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:47.638 13:47:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:47.638 13:47:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:47.638 [global] 00:16:47.638 thread=1 00:16:47.638 invalidate=1 00:16:47.638 rw=write 00:16:47.638 time_based=1 00:16:47.638 runtime=1 00:16:47.638 ioengine=libaio 00:16:47.638 direct=1 00:16:47.638 bs=4096 00:16:47.638 iodepth=1 00:16:47.638 norandommap=0 00:16:47.638 numjobs=1 00:16:47.638 00:16:47.638 verify_dump=1 00:16:47.638 verify_backlog=512 00:16:47.638 verify_state_save=0 00:16:47.638 do_verify=1 00:16:47.638 verify=crc32c-intel 00:16:47.638 [job0] 00:16:47.638 filename=/dev/nvme0n1 00:16:47.638 Could not set queue depth (nvme0n1) 00:16:47.638 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:47.638 fio-3.35 00:16:47.638 Starting 1 thread 00:16:48.579 00:16:48.579 job0: (groupid=0, jobs=1): err= 0: pid=1063859: Mon Jul 15 13:47:15 2024 00:16:48.579 read: IOPS=511, BW=2046KiB/s 
(2095kB/s)(2048KiB/1001msec) 00:16:48.579 slat (nsec): min=7496, max=77622, avg=26333.72, stdev=3285.16 00:16:48.579 clat (usec): min=727, max=1119, avg=973.13, stdev=43.50 00:16:48.579 lat (usec): min=753, max=1145, avg=999.46, stdev=43.49 00:16:48.579 clat percentiles (usec): 00:16:48.579 | 1.00th=[ 840], 5.00th=[ 906], 10.00th=[ 922], 20.00th=[ 947], 00:16:48.579 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 979], 60.00th=[ 979], 00:16:48.579 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1037], 00:16:48.579 | 99.00th=[ 1074], 99.50th=[ 1090], 99.90th=[ 1123], 99.95th=[ 1123], 00:16:48.579 | 99.99th=[ 1123] 00:16:48.579 write: IOPS=685, BW=2741KiB/s (2807kB/s)(2744KiB/1001msec); 0 zone resets 00:16:48.579 slat (usec): min=9, max=26997, avg=67.87, stdev=1029.74 00:16:48.579 clat (usec): min=363, max=1047, avg=630.55, stdev=87.05 00:16:48.579 lat (usec): min=397, max=27623, avg=698.42, stdev=1033.41 00:16:48.579 clat percentiles (usec): 00:16:48.579 | 1.00th=[ 457], 5.00th=[ 506], 10.00th=[ 515], 20.00th=[ 553], 00:16:48.579 | 30.00th=[ 603], 40.00th=[ 619], 50.00th=[ 635], 60.00th=[ 644], 00:16:48.579 | 70.00th=[ 652], 80.00th=[ 660], 90.00th=[ 717], 95.00th=[ 791], 00:16:48.579 | 99.00th=[ 938], 99.50th=[ 971], 99.90th=[ 1045], 99.95th=[ 1045], 00:16:48.579 | 99.99th=[ 1045] 00:16:48.579 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:48.579 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:48.579 lat (usec) : 500=2.09%, 750=50.58%, 1000=38.15% 00:16:48.579 lat (msec) : 2=9.18% 00:16:48.579 cpu : usr=1.20%, sys=3.90%, ctx=1201, majf=0, minf=1 00:16:48.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.579 issued rwts: total=512,686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.579 00:16:48.579 Run status group 0 (all jobs): 00:16:48.579 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:16:48.579 WRITE: bw=2741KiB/s (2807kB/s), 2741KiB/s-2741KiB/s (2807kB/s-2807kB/s), io=2744KiB (2810kB), run=1001-1001msec 00:16:48.579 00:16:48.579 Disk stats (read/write): 00:16:48.579 nvme0n1: ios=537/525, merge=0/0, ticks=1414/331, in_queue=1745, util=98.80% 00:16:48.579 13:47:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:48.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:48.840 rmmod nvme_tcp 00:16:48.840 rmmod nvme_fabrics 00:16:48.840 rmmod nvme_keyring 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1062320 ']' 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1062320 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1062320 ']' 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1062320 00:16:48.840 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:49.101 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:49.101 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1062320 00:16:49.101 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:49.101 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:49.101 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1062320' 00:16:49.101 killing process with pid 1062320 00:16:49.101 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1062320 00:16:49.101 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1062320 00:16:49.101 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:49.101 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:49.101 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:49.101 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:49.101 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:49.101 13:47:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.101 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.101 13:47:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.646 13:47:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:51.646 00:16:51.646 real 0m17.360s 00:16:51.646 user 0m49.945s 00:16:51.646 sys 0m6.114s 00:16:51.646 13:47:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:51.646 13:47:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.646 ************************************ 00:16:51.646 END TEST nvmf_nmic 00:16:51.646 ************************************ 00:16:51.646 13:47:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:51.646 13:47:17 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:51.646 13:47:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:51.646 13:47:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:51.646 13:47:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:51.646 ************************************ 00:16:51.646 START TEST nvmf_fio_target 00:16:51.646 ************************************ 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:51.646 * Looking for test storage... 00:16:51.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:51.646 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.647 13:47:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.647 13:47:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.647 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:51.647 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:51.647 13:47:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:51.647 13:47:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.255 13:47:24 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:58.255 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.255 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:58.256 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.256 13:47:24 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:58.256 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:58.256 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.256 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.516 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.516 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.516 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:58.516 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:16:58.516 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.516 13:47:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.516 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:58.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:16:58.516 00:16:58.516 --- 10.0.0.2 ping statistics --- 00:16:58.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.516 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:16:58.516 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:58.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:58.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:16:58.516 00:16:58.516 --- 10.0.0.1 ping statistics --- 00:16:58.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.516 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:16:58.516 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.516 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:58.516 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:58.516 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.516 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:58.516 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:58.516 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.516 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:58.516 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:58.776 13:47:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:58.776 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:58.776 13:47:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:58.776 13:47:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.776 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1068190 00:16:58.777 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1068190 00:16:58.777 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:58.777 13:47:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1068190 ']' 00:16:58.777 13:47:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.777 13:47:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:58.777 13:47:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
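nvmfappstart in this trace amounts to launching nvmf_tgt inside the target namespace and blocking until its JSON-RPC socket answers, which is what the waitforlisten message above reflects. A rough equivalent, assuming the default /var/tmp/spdk.sock RPC path and the checkout path used by this job; the polling loop and the rpc_get_methods probe are illustrative stand-ins for SPDK's waitforlisten helper, not the helper itself:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # UNIX-domain sockets are not scoped by the network namespace, so rpc.py can probe
  # the target from the host side until the app is ready to accept RPCs.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.5
  done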
00:16:58.777 13:47:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:58.777 13:47:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.777 [2024-07-15 13:47:25.124384] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:16:58.777 [2024-07-15 13:47:25.124432] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.777 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.777 [2024-07-15 13:47:25.192043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.777 [2024-07-15 13:47:25.257401] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.777 [2024-07-15 13:47:25.257439] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.777 [2024-07-15 13:47:25.257446] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.777 [2024-07-15 13:47:25.257452] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.777 [2024-07-15 13:47:25.257458] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.777 [2024-07-15 13:47:25.257633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.777 [2024-07-15 13:47:25.257747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.777 [2024-07-15 13:47:25.257903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.777 [2024-07-15 13:47:25.257903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:59.720 13:47:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:59.720 13:47:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:16:59.720 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:59.720 13:47:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:59.720 13:47:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.720 13:47:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.720 13:47:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:59.720 [2024-07-15 13:47:26.079256] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.720 13:47:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:59.982 13:47:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:59.982 13:47:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:59.982 13:47:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:59.982 13:47:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:00.243 13:47:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
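fio.sh now repeats over rpc.py the same target bring-up that nmic.sh drove through rpc_cmd earlier in this log: create the TCP transport, back it with malloc bdevs, expose them through a subsystem, and listen on 10.0.0.2:4420 before the initiator connects. Condensed into one sketch, with the NQN, serial number, and flag values taken from the calls in this trace (paths as they appear in this job):

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, options as used by these tests
  $rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side, from the default namespace (the trace additionally passes the
  # --hostnqn/--hostid values generated for this host):
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420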
00:17:00.243 13:47:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:00.504 13:47:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:00.504 13:47:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:00.504 13:47:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:00.764 13:47:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:00.764 13:47:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:01.025 13:47:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:01.025 13:47:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:01.025 13:47:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:01.025 13:47:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:01.285 13:47:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:01.546 13:47:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:01.546 13:47:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:01.546 13:47:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:01.546 13:47:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:01.807 13:47:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.067 [2024-07-15 13:47:28.336696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.067 13:47:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:02.067 13:47:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:02.328 13:47:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:04.239 13:47:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:04.239 13:47:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:04.239 13:47:30 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:04.239 13:47:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:04.239 13:47:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:04.239 13:47:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:06.215 13:47:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:06.215 13:47:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:06.215 13:47:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:06.215 13:47:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:06.215 13:47:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:06.215 13:47:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:06.215 13:47:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:06.215 [global] 00:17:06.215 thread=1 00:17:06.215 invalidate=1 00:17:06.215 rw=write 00:17:06.215 time_based=1 00:17:06.215 runtime=1 00:17:06.215 ioengine=libaio 00:17:06.215 direct=1 00:17:06.215 bs=4096 00:17:06.215 iodepth=1 00:17:06.215 norandommap=0 00:17:06.215 numjobs=1 00:17:06.215 00:17:06.215 verify_dump=1 00:17:06.215 verify_backlog=512 00:17:06.215 verify_state_save=0 00:17:06.215 do_verify=1 00:17:06.215 verify=crc32c-intel 00:17:06.215 [job0] 00:17:06.215 filename=/dev/nvme0n1 00:17:06.215 [job1] 00:17:06.215 filename=/dev/nvme0n2 00:17:06.215 [job2] 00:17:06.215 filename=/dev/nvme0n3 00:17:06.215 [job3] 00:17:06.215 filename=/dev/nvme0n4 00:17:06.215 Could not set queue depth (nvme0n1) 00:17:06.215 Could not set queue depth (nvme0n2) 00:17:06.215 Could not set queue depth (nvme0n3) 00:17:06.215 Could not set queue depth (nvme0n4) 00:17:06.215 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:06.215 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:06.215 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:06.215 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:06.215 fio-3.35 00:17:06.215 Starting 4 threads 00:17:07.604 00:17:07.604 job0: (groupid=0, jobs=1): err= 0: pid=1070091: Mon Jul 15 13:47:33 2024 00:17:07.604 read: IOPS=473, BW=1894KiB/s (1940kB/s)(1896KiB/1001msec) 00:17:07.604 slat (nsec): min=14920, max=59327, avg=25042.24, stdev=4672.86 00:17:07.604 clat (usec): min=884, max=1415, avg=1175.01, stdev=82.72 00:17:07.604 lat (usec): min=908, max=1434, avg=1200.05, stdev=82.84 00:17:07.604 clat percentiles (usec): 00:17:07.604 | 1.00th=[ 930], 5.00th=[ 1029], 10.00th=[ 1057], 20.00th=[ 1106], 00:17:07.604 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1188], 60.00th=[ 1205], 00:17:07.604 | 70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[ 1270], 95.00th=[ 1287], 00:17:07.604 | 99.00th=[ 1336], 99.50th=[ 1352], 99.90th=[ 1418], 99.95th=[ 1418], 00:17:07.604 | 99.99th=[ 1418] 00:17:07.604 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:07.604 slat (usec): min=9, max=136, avg=29.12, stdev= 9.22 00:17:07.604 clat 
(usec): min=485, max=1095, avg=798.58, stdev=97.59 00:17:07.604 lat (usec): min=496, max=1126, avg=827.69, stdev=100.76 00:17:07.604 clat percentiles (usec): 00:17:07.604 | 1.00th=[ 537], 5.00th=[ 627], 10.00th=[ 660], 20.00th=[ 725], 00:17:07.604 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 807], 60.00th=[ 832], 00:17:07.604 | 70.00th=[ 857], 80.00th=[ 881], 90.00th=[ 914], 95.00th=[ 947], 00:17:07.604 | 99.00th=[ 996], 99.50th=[ 1012], 99.90th=[ 1090], 99.95th=[ 1090], 00:17:07.604 | 99.99th=[ 1090] 00:17:07.604 bw ( KiB/s): min= 4096, max= 4096, per=41.28%, avg=4096.00, stdev= 0.00, samples=1 00:17:07.604 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:07.604 lat (usec) : 500=0.10%, 750=13.69%, 1000=39.35% 00:17:07.604 lat (msec) : 2=46.86% 00:17:07.604 cpu : usr=1.20%, sys=3.00%, ctx=986, majf=0, minf=1 00:17:07.604 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.604 issued rwts: total=474,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.604 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.604 job1: (groupid=0, jobs=1): err= 0: pid=1070103: Mon Jul 15 13:47:33 2024 00:17:07.604 read: IOPS=471, BW=1886KiB/s (1931kB/s)(1888KiB/1001msec) 00:17:07.604 slat (nsec): min=10002, max=63251, avg=25942.54, stdev=4447.37 00:17:07.604 clat (usec): min=935, max=1486, avg=1219.35, stdev=96.36 00:17:07.604 lat (usec): min=961, max=1511, avg=1245.29, stdev=96.40 00:17:07.604 clat percentiles (usec): 00:17:07.604 | 1.00th=[ 955], 5.00th=[ 1045], 10.00th=[ 1106], 20.00th=[ 1156], 00:17:07.604 | 30.00th=[ 1172], 40.00th=[ 1188], 50.00th=[ 1221], 60.00th=[ 1237], 00:17:07.604 | 70.00th=[ 1270], 80.00th=[ 1303], 90.00th=[ 1336], 95.00th=[ 1385], 00:17:07.604 | 99.00th=[ 1450], 99.50th=[ 1467], 99.90th=[ 1483], 99.95th=[ 1483], 00:17:07.604 | 99.99th=[ 1483] 00:17:07.604 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:07.604 slat (nsec): min=8961, max=51506, avg=29894.99, stdev=8357.55 00:17:07.604 clat (usec): min=273, max=1108, avg=759.53, stdev=138.43 00:17:07.604 lat (usec): min=306, max=1140, avg=789.42, stdev=141.81 00:17:07.604 clat percentiles (usec): 00:17:07.604 | 1.00th=[ 433], 5.00th=[ 486], 10.00th=[ 562], 20.00th=[ 644], 00:17:07.604 | 30.00th=[ 693], 40.00th=[ 742], 50.00th=[ 775], 60.00th=[ 816], 00:17:07.604 | 70.00th=[ 848], 80.00th=[ 881], 90.00th=[ 922], 95.00th=[ 955], 00:17:07.604 | 99.00th=[ 1012], 99.50th=[ 1012], 99.90th=[ 1106], 99.95th=[ 1106], 00:17:07.604 | 99.99th=[ 1106] 00:17:07.604 bw ( KiB/s): min= 4096, max= 4096, per=41.28%, avg=4096.00, stdev= 0.00, samples=1 00:17:07.604 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:07.604 lat (usec) : 500=2.64%, 750=19.11%, 1000=30.89% 00:17:07.604 lat (msec) : 2=47.36% 00:17:07.604 cpu : usr=2.40%, sys=3.40%, ctx=984, majf=0, minf=1 00:17:07.604 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.604 issued rwts: total=472,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.604 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.604 job2: (groupid=0, jobs=1): err= 0: pid=1070108: Mon Jul 15 13:47:33 2024 00:17:07.604 read: 
IOPS=1006, BW=4028KiB/s (4125kB/s)(4032KiB/1001msec) 00:17:07.604 slat (nsec): min=6549, max=58803, avg=21807.64, stdev=8048.87 00:17:07.604 clat (usec): min=213, max=720, avg=554.99, stdev=69.07 00:17:07.604 lat (usec): min=220, max=741, avg=576.80, stdev=70.46 00:17:07.604 clat percentiles (usec): 00:17:07.604 | 1.00th=[ 351], 5.00th=[ 445], 10.00th=[ 465], 20.00th=[ 494], 00:17:07.604 | 30.00th=[ 523], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 586], 00:17:07.604 | 70.00th=[ 594], 80.00th=[ 611], 90.00th=[ 635], 95.00th=[ 652], 00:17:07.604 | 99.00th=[ 685], 99.50th=[ 709], 99.90th=[ 717], 99.95th=[ 725], 00:17:07.604 | 99.99th=[ 725] 00:17:07.604 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:17:07.604 slat (nsec): min=9357, max=50625, avg=26974.72, stdev=9612.91 00:17:07.604 clat (usec): min=133, max=622, avg=367.55, stdev=79.76 00:17:07.604 lat (usec): min=143, max=654, avg=394.52, stdev=83.08 00:17:07.604 clat percentiles (usec): 00:17:07.604 | 1.00th=[ 167], 5.00th=[ 247], 10.00th=[ 269], 20.00th=[ 289], 00:17:07.604 | 30.00th=[ 318], 40.00th=[ 355], 50.00th=[ 379], 60.00th=[ 396], 00:17:07.604 | 70.00th=[ 420], 80.00th=[ 441], 90.00th=[ 465], 95.00th=[ 486], 00:17:07.604 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 578], 99.95th=[ 627], 00:17:07.604 | 99.99th=[ 627] 00:17:07.604 bw ( KiB/s): min= 4096, max= 4096, per=41.28%, avg=4096.00, stdev= 0.00, samples=1 00:17:07.604 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:07.604 lat (usec) : 250=2.90%, 500=57.09%, 750=40.01% 00:17:07.604 cpu : usr=2.30%, sys=5.70%, ctx=2032, majf=0, minf=1 00:17:07.604 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.604 issued rwts: total=1008,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.604 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.604 job3: (groupid=0, jobs=1): err= 0: pid=1070109: Mon Jul 15 13:47:33 2024 00:17:07.604 read: IOPS=14, BW=58.1KiB/s (59.5kB/s)(60.0KiB/1032msec) 00:17:07.604 slat (nsec): min=9332, max=25024, avg=23798.73, stdev=4004.45 00:17:07.604 clat (usec): min=41578, max=42131, avg=41941.48, stdev=149.78 00:17:07.604 lat (usec): min=41587, max=42156, avg=41965.28, stdev=152.52 00:17:07.604 clat percentiles (usec): 00:17:07.604 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:07.604 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:07.604 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:07.604 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:07.604 | 99.99th=[42206] 00:17:07.604 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:17:07.604 slat (nsec): min=9777, max=51333, avg=29280.94, stdev=8098.45 00:17:07.604 clat (usec): min=368, max=1034, avg=749.75, stdev=121.90 00:17:07.605 lat (usec): min=399, max=1066, avg=779.03, stdev=123.75 00:17:07.605 clat percentiles (usec): 00:17:07.605 | 1.00th=[ 461], 5.00th=[ 519], 10.00th=[ 578], 20.00th=[ 644], 00:17:07.605 | 30.00th=[ 693], 40.00th=[ 742], 50.00th=[ 766], 60.00th=[ 791], 00:17:07.605 | 70.00th=[ 816], 80.00th=[ 848], 90.00th=[ 898], 95.00th=[ 938], 00:17:07.605 | 99.00th=[ 996], 99.50th=[ 1012], 99.90th=[ 1037], 99.95th=[ 1037], 00:17:07.605 | 99.99th=[ 1037] 00:17:07.605 bw ( KiB/s): min= 4096, max= 4096, 
per=41.28%, avg=4096.00, stdev= 0.00, samples=1 00:17:07.605 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:07.605 lat (usec) : 500=2.66%, 750=40.61%, 1000=52.94% 00:17:07.605 lat (msec) : 2=0.95%, 50=2.85% 00:17:07.605 cpu : usr=0.87%, sys=1.26%, ctx=527, majf=0, minf=1 00:17:07.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.605 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.605 00:17:07.605 Run status group 0 (all jobs): 00:17:07.605 READ: bw=7632KiB/s (7815kB/s), 58.1KiB/s-4028KiB/s (59.5kB/s-4125kB/s), io=7876KiB (8065kB), run=1001-1032msec 00:17:07.605 WRITE: bw=9922KiB/s (10.2MB/s), 1984KiB/s-4092KiB/s (2032kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1032msec 00:17:07.605 00:17:07.605 Disk stats (read/write): 00:17:07.605 nvme0n1: ios=385/512, merge=0/0, ticks=456/387, in_queue=843, util=87.88% 00:17:07.605 nvme0n2: ios=379/512, merge=0/0, ticks=449/326, in_queue=775, util=89.06% 00:17:07.605 nvme0n3: ios=719/1024, merge=0/0, ticks=378/374, in_queue=752, util=88.34% 00:17:07.605 nvme0n4: ios=10/512, merge=0/0, ticks=419/368, in_queue=787, util=89.38% 00:17:07.605 13:47:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:07.605 [global] 00:17:07.605 thread=1 00:17:07.605 invalidate=1 00:17:07.605 rw=randwrite 00:17:07.605 time_based=1 00:17:07.605 runtime=1 00:17:07.605 ioengine=libaio 00:17:07.605 direct=1 00:17:07.605 bs=4096 00:17:07.605 iodepth=1 00:17:07.605 norandommap=0 00:17:07.605 numjobs=1 00:17:07.605 00:17:07.605 verify_dump=1 00:17:07.605 verify_backlog=512 00:17:07.605 verify_state_save=0 00:17:07.605 do_verify=1 00:17:07.605 verify=crc32c-intel 00:17:07.605 [job0] 00:17:07.605 filename=/dev/nvme0n1 00:17:07.605 [job1] 00:17:07.605 filename=/dev/nvme0n2 00:17:07.605 [job2] 00:17:07.605 filename=/dev/nvme0n3 00:17:07.605 [job3] 00:17:07.605 filename=/dev/nvme0n4 00:17:07.605 Could not set queue depth (nvme0n1) 00:17:07.605 Could not set queue depth (nvme0n2) 00:17:07.605 Could not set queue depth (nvme0n3) 00:17:07.605 Could not set queue depth (nvme0n4) 00:17:08.174 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:08.174 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:08.174 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:08.174 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:08.174 fio-3.35 00:17:08.174 Starting 4 threads 00:17:09.555 00:17:09.555 job0: (groupid=0, jobs=1): err= 0: pid=1070569: Mon Jul 15 13:47:35 2024 00:17:09.555 read: IOPS=418, BW=1673KiB/s (1713kB/s)(1676KiB/1002msec) 00:17:09.555 slat (nsec): min=25319, max=65858, avg=26282.28, stdev=3570.45 00:17:09.555 clat (usec): min=1059, max=1522, avg=1313.98, stdev=57.33 00:17:09.555 lat (usec): min=1085, max=1548, avg=1340.27, stdev=57.39 00:17:09.555 clat percentiles (usec): 00:17:09.555 | 1.00th=[ 1172], 5.00th=[ 1221], 10.00th=[ 1254], 20.00th=[ 1270], 00:17:09.555 | 30.00th=[ 1287], 40.00th=[ 1303], 
50.00th=[ 1319], 60.00th=[ 1319], 00:17:09.555 | 70.00th=[ 1336], 80.00th=[ 1352], 90.00th=[ 1385], 95.00th=[ 1418], 00:17:09.555 | 99.00th=[ 1450], 99.50th=[ 1483], 99.90th=[ 1516], 99.95th=[ 1516], 00:17:09.555 | 99.99th=[ 1516] 00:17:09.555 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:17:09.555 slat (nsec): min=9147, max=65579, avg=30866.45, stdev=7467.91 00:17:09.555 clat (usec): min=436, max=1036, avg=812.18, stdev=93.08 00:17:09.555 lat (usec): min=445, max=1067, avg=843.04, stdev=95.46 00:17:09.555 clat percentiles (usec): 00:17:09.555 | 1.00th=[ 586], 5.00th=[ 652], 10.00th=[ 693], 20.00th=[ 734], 00:17:09.555 | 30.00th=[ 758], 40.00th=[ 799], 50.00th=[ 816], 60.00th=[ 840], 00:17:09.555 | 70.00th=[ 873], 80.00th=[ 898], 90.00th=[ 922], 95.00th=[ 955], 00:17:09.555 | 99.00th=[ 996], 99.50th=[ 1020], 99.90th=[ 1037], 99.95th=[ 1037], 00:17:09.555 | 99.99th=[ 1037] 00:17:09.555 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:09.555 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:09.555 lat (usec) : 500=0.21%, 750=14.50%, 1000=39.74% 00:17:09.555 lat (msec) : 2=45.54% 00:17:09.555 cpu : usr=2.40%, sys=3.20%, ctx=931, majf=0, minf=1 00:17:09.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:09.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.555 issued rwts: total=419,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:09.555 job1: (groupid=0, jobs=1): err= 0: pid=1070581: Mon Jul 15 13:47:35 2024 00:17:09.555 read: IOPS=440, BW=1760KiB/s (1803kB/s)(1764KiB/1002msec) 00:17:09.555 slat (nsec): min=26002, max=50261, avg=26869.91, stdev=2841.34 00:17:09.555 clat (usec): min=926, max=1450, avg=1268.10, stdev=63.07 00:17:09.555 lat (usec): min=953, max=1476, avg=1294.97, stdev=63.00 00:17:09.555 clat percentiles (usec): 00:17:09.555 | 1.00th=[ 1074], 5.00th=[ 1156], 10.00th=[ 1205], 20.00th=[ 1221], 00:17:09.555 | 30.00th=[ 1237], 40.00th=[ 1254], 50.00th=[ 1270], 60.00th=[ 1287], 00:17:09.555 | 70.00th=[ 1303], 80.00th=[ 1319], 90.00th=[ 1336], 95.00th=[ 1369], 00:17:09.555 | 99.00th=[ 1401], 99.50th=[ 1401], 99.90th=[ 1450], 99.95th=[ 1450], 00:17:09.555 | 99.99th=[ 1450] 00:17:09.555 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:17:09.555 slat (nsec): min=9281, max=56508, avg=31234.23, stdev=8670.51 00:17:09.555 clat (usec): min=488, max=1041, avg=792.64, stdev=99.99 00:17:09.555 lat (usec): min=501, max=1073, avg=823.88, stdev=103.72 00:17:09.555 clat percentiles (usec): 00:17:09.555 | 1.00th=[ 529], 5.00th=[ 603], 10.00th=[ 644], 20.00th=[ 717], 00:17:09.555 | 30.00th=[ 742], 40.00th=[ 775], 50.00th=[ 807], 60.00th=[ 832], 00:17:09.555 | 70.00th=[ 857], 80.00th=[ 873], 90.00th=[ 914], 95.00th=[ 938], 00:17:09.555 | 99.00th=[ 988], 99.50th=[ 1012], 99.90th=[ 1045], 99.95th=[ 1045], 00:17:09.555 | 99.99th=[ 1045] 00:17:09.555 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:09.555 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:09.555 lat (usec) : 500=0.21%, 750=17.00%, 1000=36.10% 00:17:09.555 lat (msec) : 2=46.69% 00:17:09.555 cpu : usr=2.20%, sys=3.60%, ctx=954, majf=0, minf=1 00:17:09.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:09.555 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.555 issued rwts: total=441,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:09.555 job2: (groupid=0, jobs=1): err= 0: pid=1070599: Mon Jul 15 13:47:35 2024 00:17:09.555 read: IOPS=16, BW=65.6KiB/s (67.2kB/s)(68.0KiB/1036msec) 00:17:09.555 slat (nsec): min=25072, max=25626, avg=25308.59, stdev=158.26 00:17:09.555 clat (usec): min=1179, max=42279, avg=39581.04, stdev=9896.11 00:17:09.555 lat (usec): min=1205, max=42304, avg=39606.34, stdev=9896.14 00:17:09.555 clat percentiles (usec): 00:17:09.555 | 1.00th=[ 1188], 5.00th=[ 1188], 10.00th=[41681], 20.00th=[41681], 00:17:09.555 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:09.555 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:09.555 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:09.555 | 99.99th=[42206] 00:17:09.555 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:17:09.555 slat (nsec): min=10081, max=63576, avg=31768.74, stdev=6193.56 00:17:09.555 clat (usec): min=314, max=961, avg=666.30, stdev=137.31 00:17:09.555 lat (usec): min=325, max=1010, avg=698.07, stdev=138.80 00:17:09.555 clat percentiles (usec): 00:17:09.555 | 1.00th=[ 367], 5.00th=[ 437], 10.00th=[ 469], 20.00th=[ 537], 00:17:09.555 | 30.00th=[ 586], 40.00th=[ 627], 50.00th=[ 676], 60.00th=[ 717], 00:17:09.555 | 70.00th=[ 758], 80.00th=[ 791], 90.00th=[ 840], 95.00th=[ 873], 00:17:09.555 | 99.00th=[ 914], 99.50th=[ 947], 99.90th=[ 963], 99.95th=[ 963], 00:17:09.555 | 99.99th=[ 963] 00:17:09.555 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:09.555 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:09.555 lat (usec) : 500=13.80%, 750=51.23%, 1000=31.76% 00:17:09.555 lat (msec) : 2=0.19%, 50=3.02% 00:17:09.555 cpu : usr=0.68%, sys=1.74%, ctx=530, majf=0, minf=1 00:17:09.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:09.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.555 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:09.556 job3: (groupid=0, jobs=1): err= 0: pid=1070606: Mon Jul 15 13:47:35 2024 00:17:09.556 read: IOPS=15, BW=63.1KiB/s (64.6kB/s)(64.0KiB/1014msec) 00:17:09.556 slat (nsec): min=25730, max=26314, avg=26016.87, stdev=161.11 00:17:09.556 clat (usec): min=41267, max=42040, avg=41921.33, stdev=179.98 00:17:09.556 lat (usec): min=41293, max=42066, avg=41947.34, stdev=179.99 00:17:09.556 clat percentiles (usec): 00:17:09.556 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:17:09.556 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:09.556 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:09.556 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:09.556 | 99.99th=[42206] 00:17:09.556 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:17:09.556 slat (nsec): min=9769, max=54535, avg=27496.35, stdev=9665.14 00:17:09.556 clat (usec): min=349, max=1376, avg=633.15, stdev=119.02 00:17:09.556 lat (usec): min=382, 
max=1409, avg=660.65, stdev=121.60 00:17:09.556 clat percentiles (usec): 00:17:09.556 | 1.00th=[ 388], 5.00th=[ 482], 10.00th=[ 502], 20.00th=[ 529], 00:17:09.556 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 627], 60.00th=[ 644], 00:17:09.556 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 799], 95.00th=[ 865], 00:17:09.556 | 99.00th=[ 963], 99.50th=[ 996], 99.90th=[ 1385], 99.95th=[ 1385], 00:17:09.556 | 99.99th=[ 1385] 00:17:09.556 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:09.556 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:09.556 lat (usec) : 500=8.90%, 750=75.57%, 1000=12.12% 00:17:09.556 lat (msec) : 2=0.38%, 50=3.03% 00:17:09.556 cpu : usr=0.89%, sys=1.28%, ctx=529, majf=0, minf=1 00:17:09.556 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:09.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.556 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.556 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:09.556 00:17:09.556 Run status group 0 (all jobs): 00:17:09.556 READ: bw=3448KiB/s (3531kB/s), 63.1KiB/s-1760KiB/s (64.6kB/s-1803kB/s), io=3572KiB (3658kB), run=1002-1036msec 00:17:09.556 WRITE: bw=7907KiB/s (8097kB/s), 1977KiB/s-2044KiB/s (2024kB/s-2093kB/s), io=8192KiB (8389kB), run=1002-1036msec 00:17:09.556 00:17:09.556 Disk stats (read/write): 00:17:09.556 nvme0n1: ios=347/512, merge=0/0, ticks=424/312, in_queue=736, util=88.28% 00:17:09.556 nvme0n2: ios=366/512, merge=0/0, ticks=890/314, in_queue=1204, util=97.04% 00:17:09.556 nvme0n3: ios=51/512, merge=0/0, ticks=916/316, in_queue=1232, util=96.63% 00:17:09.556 nvme0n4: ios=49/512, merge=0/0, ticks=1344/312, in_queue=1656, util=99.79% 00:17:09.556 13:47:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:09.556 [global] 00:17:09.556 thread=1 00:17:09.556 invalidate=1 00:17:09.556 rw=write 00:17:09.556 time_based=1 00:17:09.556 runtime=1 00:17:09.556 ioengine=libaio 00:17:09.556 direct=1 00:17:09.556 bs=4096 00:17:09.556 iodepth=128 00:17:09.556 norandommap=0 00:17:09.556 numjobs=1 00:17:09.556 00:17:09.556 verify_dump=1 00:17:09.556 verify_backlog=512 00:17:09.556 verify_state_save=0 00:17:09.556 do_verify=1 00:17:09.556 verify=crc32c-intel 00:17:09.556 [job0] 00:17:09.556 filename=/dev/nvme0n1 00:17:09.556 [job1] 00:17:09.556 filename=/dev/nvme0n2 00:17:09.556 [job2] 00:17:09.556 filename=/dev/nvme0n3 00:17:09.556 [job3] 00:17:09.556 filename=/dev/nvme0n4 00:17:09.556 Could not set queue depth (nvme0n1) 00:17:09.556 Could not set queue depth (nvme0n2) 00:17:09.556 Could not set queue depth (nvme0n3) 00:17:09.556 Could not set queue depth (nvme0n4) 00:17:09.556 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:09.556 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:09.556 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:09.556 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:09.556 fio-3.35 00:17:09.556 Starting 4 threads 00:17:10.941 00:17:10.941 job0: (groupid=0, jobs=1): err= 0: pid=1071058: Mon Jul 15 13:47:37 2024 
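The nvme0n1-nvme0n4 devices exercised by these fio jobs are the four namespaces that target/fio.sh attached to nqn.2016-06.io.spdk:cnode1 earlier in this log (Malloc0, Malloc1, the raid0 stripe and the concat0 bdev). A condensed sketch of that setup follows; the long script path is shortened to rpc.py, the host NQN/host ID options to nvme connect are omitted, and the creation of Malloc0/Malloc1 is assumed to mirror the bdev_malloc_create calls shown above.

# backing malloc bdevs (64 MiB, 512-byte blocks) and the two RAID bdevs
rpc.py bdev_malloc_create 64 512        # -> Malloc2 (Malloc3..Malloc6 created the same way)
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# one NVMe/TCP subsystem with four namespaces, then connect from the initiator side
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
done
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420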
00:17:10.941 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:17:10.941 slat (nsec): min=892, max=16586k, avg=78699.41, stdev=653291.46 00:17:10.941 clat (usec): min=2106, max=50161, avg=12192.28, stdev=5734.38 00:17:10.941 lat (usec): min=2131, max=50167, avg=12270.98, stdev=5757.80 00:17:10.941 clat percentiles (usec): 00:17:10.941 | 1.00th=[ 2704], 5.00th=[ 4883], 10.00th=[ 5735], 20.00th=[ 7308], 00:17:10.941 | 30.00th=[ 8848], 40.00th=[10159], 50.00th=[10945], 60.00th=[12649], 00:17:10.941 | 70.00th=[14746], 80.00th=[16581], 90.00th=[18482], 95.00th=[20841], 00:17:10.941 | 99.00th=[29230], 99.50th=[46400], 99.90th=[46924], 99.95th=[46924], 00:17:10.941 | 99.99th=[50070] 00:17:10.941 write: IOPS=5491, BW=21.5MiB/s (22.5MB/s)(21.6MiB/1007msec); 0 zone resets 00:17:10.941 slat (nsec): min=1582, max=45825k, avg=80834.02, stdev=914057.38 00:17:10.941 clat (usec): min=863, max=58265, avg=11328.27, stdev=9141.82 00:17:10.941 lat (usec): min=871, max=58277, avg=11409.11, stdev=9198.38 00:17:10.941 clat percentiles (usec): 00:17:10.941 | 1.00th=[ 2114], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 6128], 00:17:10.941 | 30.00th=[ 7046], 40.00th=[ 8094], 50.00th=[ 8848], 60.00th=[ 9765], 00:17:10.941 | 70.00th=[11338], 80.00th=[13435], 90.00th=[17695], 95.00th=[30540], 00:17:10.941 | 99.00th=[53740], 99.50th=[56361], 99.90th=[57410], 99.95th=[57410], 00:17:10.941 | 99.99th=[58459] 00:17:10.941 bw ( KiB/s): min=20480, max=22736, per=23.01%, avg=21608.00, stdev=1595.23, samples=2 00:17:10.941 iops : min= 5120, max= 5684, avg=5402.00, stdev=398.81, samples=2 00:17:10.941 lat (usec) : 1000=0.03% 00:17:10.941 lat (msec) : 2=0.29%, 4=3.64%, 10=44.23%, 20=44.22%, 50=6.99% 00:17:10.941 lat (msec) : 100=0.60% 00:17:10.941 cpu : usr=4.37%, sys=5.27%, ctx=434, majf=0, minf=1 00:17:10.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:10.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:10.941 issued rwts: total=5120,5530,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:10.941 job1: (groupid=0, jobs=1): err= 0: pid=1071071: Mon Jul 15 13:47:37 2024 00:17:10.941 read: IOPS=6445, BW=25.2MiB/s (26.4MB/s)(25.4MiB/1007msec) 00:17:10.941 slat (nsec): min=913, max=11732k, avg=74210.98, stdev=525855.17 00:17:10.941 clat (usec): min=2300, max=24709, avg=9945.53, stdev=3606.57 00:17:10.941 lat (usec): min=3240, max=24716, avg=10019.74, stdev=3628.85 00:17:10.941 clat percentiles (usec): 00:17:10.941 | 1.00th=[ 4686], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 6718], 00:17:10.941 | 30.00th=[ 7701], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[10159], 00:17:10.941 | 70.00th=[11338], 80.00th=[12518], 90.00th=[15139], 95.00th=[16909], 00:17:10.941 | 99.00th=[21627], 99.50th=[23987], 99.90th=[24249], 99.95th=[24773], 00:17:10.941 | 99.99th=[24773] 00:17:10.941 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:17:10.941 slat (nsec): min=1537, max=12089k, avg=72624.07, stdev=477612.34 00:17:10.941 clat (usec): min=1151, max=47704, avg=9496.41, stdev=6532.01 00:17:10.941 lat (usec): min=1163, max=47709, avg=9569.03, stdev=6570.87 00:17:10.941 clat percentiles (usec): 00:17:10.941 | 1.00th=[ 3294], 5.00th=[ 4621], 10.00th=[ 5211], 20.00th=[ 5932], 00:17:10.941 | 30.00th=[ 6390], 40.00th=[ 7046], 50.00th=[ 7570], 60.00th=[ 8979], 00:17:10.941 | 70.00th=[10159], 80.00th=[11469], 
90.00th=[13698], 95.00th=[17171], 00:17:10.941 | 99.00th=[43779], 99.50th=[45876], 99.90th=[47449], 99.95th=[47449], 00:17:10.941 | 99.99th=[47449] 00:17:10.941 bw ( KiB/s): min=24144, max=29104, per=28.35%, avg=26624.00, stdev=3507.25, samples=2 00:17:10.941 iops : min= 6036, max= 7276, avg=6656.00, stdev=876.81, samples=2 00:17:10.941 lat (msec) : 2=0.18%, 4=1.10%, 10=62.36%, 20=33.41%, 50=2.94% 00:17:10.941 cpu : usr=5.07%, sys=6.66%, ctx=464, majf=0, minf=1 00:17:10.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:10.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:10.941 issued rwts: total=6491,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:10.941 job2: (groupid=0, jobs=1): err= 0: pid=1071088: Mon Jul 15 13:47:37 2024 00:17:10.941 read: IOPS=4331, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1004msec) 00:17:10.941 slat (nsec): min=880, max=18224k, avg=105510.21, stdev=852657.16 00:17:10.941 clat (usec): min=1878, max=53762, avg=14059.42, stdev=5881.29 00:17:10.941 lat (usec): min=6313, max=64617, avg=14164.93, stdev=5946.02 00:17:10.941 clat percentiles (usec): 00:17:10.941 | 1.00th=[ 6587], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10945], 00:17:10.941 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[12780], 00:17:10.941 | 70.00th=[13435], 80.00th=[15139], 90.00th=[22414], 95.00th=[26870], 00:17:10.941 | 99.00th=[35390], 99.50th=[42730], 99.90th=[50594], 99.95th=[50594], 00:17:10.941 | 99.99th=[53740] 00:17:10.941 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:17:10.941 slat (nsec): min=1531, max=11823k, avg=112818.99, stdev=685153.53 00:17:10.941 clat (usec): min=1258, max=69870, avg=14375.50, stdev=11116.49 00:17:10.941 lat (usec): min=1270, max=70669, avg=14488.32, stdev=11193.71 00:17:10.941 clat percentiles (usec): 00:17:10.941 | 1.00th=[ 4686], 5.00th=[ 6456], 10.00th=[ 7635], 20.00th=[ 8717], 00:17:10.941 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11338], 60.00th=[12125], 00:17:10.941 | 70.00th=[12911], 80.00th=[14746], 90.00th=[19268], 95.00th=[45876], 00:17:10.941 | 99.00th=[64750], 99.50th=[65274], 99.90th=[69731], 99.95th=[69731], 00:17:10.941 | 99.99th=[69731] 00:17:10.941 bw ( KiB/s): min=16384, max=20480, per=19.63%, avg=18432.00, stdev=2896.31, samples=2 00:17:10.941 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:17:10.941 lat (msec) : 2=0.06%, 4=0.36%, 10=19.69%, 20=69.15%, 50=9.00% 00:17:10.941 lat (msec) : 100=1.74% 00:17:10.941 cpu : usr=3.19%, sys=3.79%, ctx=302, majf=0, minf=1 00:17:10.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:10.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:10.941 issued rwts: total=4349,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:10.941 job3: (groupid=0, jobs=1): err= 0: pid=1071095: Mon Jul 15 13:47:37 2024 00:17:10.941 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:17:10.941 slat (nsec): min=969, max=11648k, avg=76990.69, stdev=555301.54 00:17:10.941 clat (usec): min=4891, max=33347, avg=10216.75, stdev=3873.01 00:17:10.941 lat (usec): min=4896, max=33378, avg=10293.74, stdev=3904.86 00:17:10.941 clat percentiles 
(usec): 00:17:10.941 | 1.00th=[ 5342], 5.00th=[ 6783], 10.00th=[ 6849], 20.00th=[ 7570], 00:17:10.941 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9634], 00:17:10.941 | 70.00th=[10814], 80.00th=[12125], 90.00th=[15270], 95.00th=[18744], 00:17:10.941 | 99.00th=[26870], 99.50th=[26870], 99.90th=[30278], 99.95th=[30278], 00:17:10.941 | 99.99th=[33424] 00:17:10.941 write: IOPS=6827, BW=26.7MiB/s (28.0MB/s)(26.8MiB/1003msec); 0 zone resets 00:17:10.941 slat (nsec): min=1674, max=7839.2k, avg=65215.37, stdev=413945.16 00:17:10.941 clat (usec): min=1212, max=22475, avg=8586.87, stdev=2682.10 00:17:10.941 lat (usec): min=1223, max=22477, avg=8652.08, stdev=2689.72 00:17:10.941 clat percentiles (usec): 00:17:10.941 | 1.00th=[ 3884], 5.00th=[ 4883], 10.00th=[ 5669], 20.00th=[ 6456], 00:17:10.941 | 30.00th=[ 7111], 40.00th=[ 7570], 50.00th=[ 8291], 60.00th=[ 8979], 00:17:10.941 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11731], 95.00th=[14091], 00:17:10.941 | 99.00th=[17695], 99.50th=[18220], 99.90th=[20841], 99.95th=[22152], 00:17:10.941 | 99.99th=[22414] 00:17:10.941 bw ( KiB/s): min=24576, max=29408, per=28.74%, avg=26992.00, stdev=3416.74, samples=2 00:17:10.941 iops : min= 6144, max= 7352, avg=6748.00, stdev=854.18, samples=2 00:17:10.941 lat (msec) : 2=0.02%, 4=0.86%, 10=69.91%, 20=27.29%, 50=1.93% 00:17:10.941 cpu : usr=5.99%, sys=6.79%, ctx=466, majf=0, minf=1 00:17:10.941 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:10.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:10.941 issued rwts: total=6656,6848,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.941 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:10.941 00:17:10.941 Run status group 0 (all jobs): 00:17:10.941 READ: bw=87.7MiB/s (92.0MB/s), 16.9MiB/s-25.9MiB/s (17.7MB/s-27.2MB/s), io=88.3MiB (92.6MB), run=1003-1007msec 00:17:10.941 WRITE: bw=91.7MiB/s (96.2MB/s), 17.9MiB/s-26.7MiB/s (18.8MB/s-28.0MB/s), io=92.4MiB (96.8MB), run=1003-1007msec 00:17:10.941 00:17:10.941 Disk stats (read/write): 00:17:10.941 nvme0n1: ios=4126/4279, merge=0/0, ticks=47270/34543, in_queue=81813, util=99.20% 00:17:10.941 nvme0n2: ios=5667/5983, merge=0/0, ticks=54105/47912, in_queue=102017, util=92.04% 00:17:10.941 nvme0n3: ios=3584/3727, merge=0/0, ticks=32134/34703, in_queue=66837, util=88.50% 00:17:10.941 nvme0n4: ios=5427/5632, merge=0/0, ticks=54842/46885, in_queue=101727, util=100.00% 00:17:10.941 13:47:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:10.941 [global] 00:17:10.941 thread=1 00:17:10.941 invalidate=1 00:17:10.941 rw=randwrite 00:17:10.941 time_based=1 00:17:10.941 runtime=1 00:17:10.941 ioengine=libaio 00:17:10.941 direct=1 00:17:10.941 bs=4096 00:17:10.941 iodepth=128 00:17:10.941 norandommap=0 00:17:10.941 numjobs=1 00:17:10.941 00:17:10.941 verify_dump=1 00:17:10.941 verify_backlog=512 00:17:10.941 verify_state_save=0 00:17:10.941 do_verify=1 00:17:10.941 verify=crc32c-intel 00:17:10.941 [job0] 00:17:10.941 filename=/dev/nvme0n1 00:17:10.941 [job1] 00:17:10.941 filename=/dev/nvme0n2 00:17:10.941 [job2] 00:17:10.941 filename=/dev/nvme0n3 00:17:10.941 [job3] 00:17:10.941 filename=/dev/nvme0n4 00:17:10.941 Could not set queue depth (nvme0n1) 00:17:10.941 Could not set queue depth (nvme0n2) 00:17:10.941 Could not set queue depth (nvme0n3) 
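Each fio-wrapper invocation above expands its -p/-i/-d/-t/-r arguments into the job file echoed before the run: a shared [global] section plus one job per /dev/nvme0nX namespace. The recurring "Could not set queue depth" lines appear before every run in this log and the jobs still complete (err= 0 above), so they read as warnings rather than failures. As a rough standalone equivalent for a single device of the randwrite/iodepth=128 case being started here (a sketch only, not the wrapper's exact command line; option values are taken from the dumped job file):

fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
    --rw=randwrite --bs=4096 --iodepth=128 --numjobs=1 \
    --time_based=1 --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
    --verify_backlog=512 --verify_state_save=0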
00:17:10.941 Could not set queue depth (nvme0n4) 00:17:11.200 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:11.200 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:11.200 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:11.200 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:11.200 fio-3.35 00:17:11.200 Starting 4 threads 00:17:12.583 00:17:12.583 job0: (groupid=0, jobs=1): err= 0: pid=1071560: Mon Jul 15 13:47:38 2024 00:17:12.583 read: IOPS=3723, BW=14.5MiB/s (15.3MB/s)(14.6MiB/1006msec) 00:17:12.583 slat (nsec): min=867, max=30327k, avg=131100.44, stdev=1068005.11 00:17:12.583 clat (usec): min=1478, max=82212, avg=17181.00, stdev=12737.16 00:17:12.583 lat (usec): min=3674, max=82236, avg=17312.10, stdev=12821.74 00:17:12.583 clat percentiles (usec): 00:17:12.583 | 1.00th=[ 5866], 5.00th=[ 6456], 10.00th=[ 7308], 20.00th=[ 8848], 00:17:12.583 | 30.00th=[ 9765], 40.00th=[11207], 50.00th=[12387], 60.00th=[14222], 00:17:12.583 | 70.00th=[17171], 80.00th=[22938], 90.00th=[35390], 95.00th=[43254], 00:17:12.583 | 99.00th=[71828], 99.50th=[71828], 99.90th=[71828], 99.95th=[77071], 00:17:12.583 | 99.99th=[82314] 00:17:12.583 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:17:12.583 slat (nsec): min=1524, max=18163k, avg=120318.89, stdev=829667.43 00:17:12.583 clat (usec): min=3397, max=55661, avg=15365.44, stdev=10425.75 00:17:12.583 lat (usec): min=3406, max=55695, avg=15485.76, stdev=10495.37 00:17:12.583 clat percentiles (usec): 00:17:12.583 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 6390], 20.00th=[ 7767], 00:17:12.583 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[11469], 60.00th=[12518], 00:17:12.583 | 70.00th=[18482], 80.00th=[22414], 90.00th=[29492], 95.00th=[38536], 00:17:12.583 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:17:12.583 | 99.99th=[55837] 00:17:12.583 bw ( KiB/s): min=16384, max=16384, per=19.34%, avg=16384.00, stdev= 0.00, samples=2 00:17:12.583 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:17:12.583 lat (msec) : 2=0.01%, 4=0.20%, 10=37.12%, 20=39.94%, 50=19.94% 00:17:12.583 lat (msec) : 100=2.78% 00:17:12.583 cpu : usr=1.89%, sys=3.88%, ctx=329, majf=0, minf=2 00:17:12.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:12.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:12.583 issued rwts: total=3746,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:12.583 job1: (groupid=0, jobs=1): err= 0: pid=1071592: Mon Jul 15 13:47:38 2024 00:17:12.583 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:17:12.583 slat (nsec): min=886, max=12275k, avg=81807.66, stdev=517371.83 00:17:12.583 clat (usec): min=3683, max=37156, avg=10877.93, stdev=5129.59 00:17:12.583 lat (usec): min=5428, max=37180, avg=10959.74, stdev=5151.77 00:17:12.583 clat percentiles (usec): 00:17:12.583 | 1.00th=[ 5800], 5.00th=[ 6587], 10.00th=[ 7111], 20.00th=[ 7832], 00:17:12.583 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9896], 00:17:12.583 | 70.00th=[10945], 80.00th=[11731], 90.00th=[16581], 95.00th=[23200], 00:17:12.583 | 
99.00th=[31327], 99.50th=[33424], 99.90th=[33817], 99.95th=[33817], 00:17:12.583 | 99.99th=[36963] 00:17:12.583 write: IOPS=5931, BW=23.2MiB/s (24.3MB/s)(23.2MiB/1002msec); 0 zone resets 00:17:12.583 slat (nsec): min=1500, max=14291k, avg=87416.31, stdev=595989.05 00:17:12.583 clat (usec): min=594, max=62207, avg=10840.54, stdev=7778.38 00:17:12.583 lat (usec): min=2609, max=62232, avg=10927.96, stdev=7838.13 00:17:12.583 clat percentiles (usec): 00:17:12.583 | 1.00th=[ 4948], 5.00th=[ 6390], 10.00th=[ 6718], 20.00th=[ 7046], 00:17:12.583 | 30.00th=[ 7373], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:17:12.583 | 70.00th=[ 9372], 80.00th=[11469], 90.00th=[16581], 95.00th=[29230], 00:17:12.583 | 99.00th=[43779], 99.50th=[54789], 99.90th=[62129], 99.95th=[62129], 00:17:12.583 | 99.99th=[62129] 00:17:12.583 bw ( KiB/s): min=19448, max=27080, per=27.47%, avg=23264.00, stdev=5396.64, samples=2 00:17:12.583 iops : min= 4862, max= 6770, avg=5816.00, stdev=1349.16, samples=2 00:17:12.583 lat (usec) : 750=0.01% 00:17:12.583 lat (msec) : 4=0.29%, 10=67.90%, 20=23.44%, 50=7.91%, 100=0.46% 00:17:12.583 cpu : usr=2.60%, sys=4.10%, ctx=556, majf=0, minf=1 00:17:12.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:12.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:12.583 issued rwts: total=5632,5943,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:12.583 job2: (groupid=0, jobs=1): err= 0: pid=1071627: Mon Jul 15 13:47:38 2024 00:17:12.583 read: IOPS=5559, BW=21.7MiB/s (22.8MB/s)(21.8MiB/1006msec) 00:17:12.583 slat (nsec): min=1026, max=12009k, avg=85220.75, stdev=633238.55 00:17:12.583 clat (usec): min=1842, max=43399, avg=11453.65, stdev=4677.07 00:17:12.583 lat (usec): min=3704, max=47512, avg=11538.87, stdev=4708.62 00:17:12.583 clat percentiles (usec): 00:17:12.583 | 1.00th=[ 4883], 5.00th=[ 5866], 10.00th=[ 6718], 20.00th=[ 7767], 00:17:12.583 | 30.00th=[ 8586], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[11338], 00:17:12.583 | 70.00th=[12649], 80.00th=[14877], 90.00th=[17695], 95.00th=[20055], 00:17:12.583 | 99.00th=[24773], 99.50th=[27132], 99.90th=[43254], 99.95th=[43254], 00:17:12.583 | 99.99th=[43254] 00:17:12.583 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:17:12.583 slat (nsec): min=1593, max=8364.9k, avg=80485.86, stdev=501557.65 00:17:12.583 clat (usec): min=2074, max=56593, avg=11219.45, stdev=8418.37 00:17:12.583 lat (usec): min=2765, max=56596, avg=11299.94, stdev=8473.81 00:17:12.583 clat percentiles (usec): 00:17:12.583 | 1.00th=[ 3556], 5.00th=[ 4752], 10.00th=[ 5604], 20.00th=[ 6652], 00:17:12.583 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[ 9110], 60.00th=[ 9765], 00:17:12.583 | 70.00th=[10683], 80.00th=[11600], 90.00th=[17171], 95.00th=[30802], 00:17:12.583 | 99.00th=[51643], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361], 00:17:12.583 | 99.99th=[56361] 00:17:12.583 bw ( KiB/s): min=19376, max=25680, per=26.60%, avg=22528.00, stdev=4457.60, samples=2 00:17:12.583 iops : min= 4844, max= 6420, avg=5632.00, stdev=1114.40, samples=2 00:17:12.583 lat (msec) : 2=0.01%, 4=0.63%, 10=53.18%, 20=39.00%, 50=6.63% 00:17:12.583 lat (msec) : 100=0.55% 00:17:12.583 cpu : usr=3.88%, sys=6.07%, ctx=397, majf=0, minf=1 00:17:12.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:12.583 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:12.583 issued rwts: total=5593,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:12.583 job3: (groupid=0, jobs=1): err= 0: pid=1071640: Mon Jul 15 13:47:38 2024 00:17:12.583 read: IOPS=5589, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:17:12.583 slat (nsec): min=932, max=11819k, avg=89385.31, stdev=583223.90 00:17:12.583 clat (usec): min=1367, max=52583, avg=11830.15, stdev=4928.88 00:17:12.583 lat (usec): min=1812, max=52585, avg=11919.53, stdev=4963.73 00:17:12.583 clat percentiles (usec): 00:17:12.583 | 1.00th=[ 2704], 5.00th=[ 6325], 10.00th=[ 8356], 20.00th=[ 8848], 00:17:12.583 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[11207], 00:17:12.583 | 70.00th=[12256], 80.00th=[14353], 90.00th=[19268], 95.00th=[21890], 00:17:12.583 | 99.00th=[24249], 99.50th=[35914], 99.90th=[47973], 99.95th=[47973], 00:17:12.583 | 99.99th=[52691] 00:17:12.583 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:17:12.583 slat (nsec): min=1539, max=8500.3k, avg=74692.16, stdev=451838.56 00:17:12.583 clat (usec): min=655, max=36197, avg=10835.25, stdev=5663.54 00:17:12.583 lat (usec): min=664, max=36206, avg=10909.94, stdev=5691.39 00:17:12.583 clat percentiles (usec): 00:17:12.583 | 1.00th=[ 1942], 5.00th=[ 4080], 10.00th=[ 6652], 20.00th=[ 7504], 00:17:12.583 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10159], 00:17:12.583 | 70.00th=[11076], 80.00th=[12125], 90.00th=[17957], 95.00th=[22414], 00:17:12.583 | 99.00th=[32637], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:17:12.583 | 99.99th=[36439] 00:17:12.583 bw ( KiB/s): min=17800, max=27256, per=26.60%, avg=22528.00, stdev=6686.40, samples=2 00:17:12.583 iops : min= 4450, max= 6814, avg=5632.00, stdev=1671.60, samples=2 00:17:12.583 lat (usec) : 750=0.01%, 1000=0.01% 00:17:12.583 lat (msec) : 2=0.83%, 4=2.69%, 10=47.64%, 20=41.89%, 50=6.91% 00:17:12.583 lat (msec) : 100=0.02% 00:17:12.583 cpu : usr=3.88%, sys=4.08%, ctx=584, majf=0, minf=1 00:17:12.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:12.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:12.583 issued rwts: total=5623,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:12.584 00:17:12.584 Run status group 0 (all jobs): 00:17:12.584 READ: bw=80.0MiB/s (83.8MB/s), 14.5MiB/s-22.0MiB/s (15.3MB/s-23.0MB/s), io=80.4MiB (84.4MB), run=1002-1006msec 00:17:12.584 WRITE: bw=82.7MiB/s (86.7MB/s), 15.9MiB/s-23.2MiB/s (16.7MB/s-24.3MB/s), io=83.2MiB (87.3MB), run=1002-1006msec 00:17:12.584 00:17:12.584 Disk stats (read/write): 00:17:12.584 nvme0n1: ios=3102/3095, merge=0/0, ticks=22317/16687, in_queue=39004, util=97.49% 00:17:12.584 nvme0n2: ios=4146/4207, merge=0/0, ticks=15951/16473, in_queue=32424, util=96.68% 00:17:12.584 nvme0n3: ios=3616/3750, merge=0/0, ticks=42614/44487, in_queue=87101, util=99.89% 00:17:12.584 nvme0n4: ios=4641/4971, merge=0/0, ticks=20905/21907, in_queue=42812, util=97.47% 00:17:12.584 13:47:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:12.584 13:47:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1071712 00:17:12.584 13:47:39 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:12.584 13:47:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:12.584 [global] 00:17:12.584 thread=1 00:17:12.584 invalidate=1 00:17:12.584 rw=read 00:17:12.584 time_based=1 00:17:12.584 runtime=10 00:17:12.584 ioengine=libaio 00:17:12.584 direct=1 00:17:12.584 bs=4096 00:17:12.584 iodepth=1 00:17:12.584 norandommap=1 00:17:12.584 numjobs=1 00:17:12.584 00:17:12.584 [job0] 00:17:12.584 filename=/dev/nvme0n1 00:17:12.584 [job1] 00:17:12.584 filename=/dev/nvme0n2 00:17:12.584 [job2] 00:17:12.584 filename=/dev/nvme0n3 00:17:12.584 [job3] 00:17:12.584 filename=/dev/nvme0n4 00:17:12.886 Could not set queue depth (nvme0n1) 00:17:12.886 Could not set queue depth (nvme0n2) 00:17:12.886 Could not set queue depth (nvme0n3) 00:17:12.886 Could not set queue depth (nvme0n4) 00:17:13.145 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.145 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.145 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.145 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.145 fio-3.35 00:17:13.145 Starting 4 threads 00:17:15.689 13:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:15.689 13:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:15.689 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=258048, buflen=4096 00:17:15.689 fio: pid=1072086, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:15.949 13:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:15.949 13:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:15.949 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=270336, buflen=4096 00:17:15.949 fio: pid=1072079, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:16.210 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=282624, buflen=4096 00:17:16.210 fio: pid=1072056, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:16.210 13:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:16.210 13:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:16.210 13:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:16.210 13:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:16.210 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=307200, buflen=4096 00:17:16.210 fio: pid=1072059, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:16.471 00:17:16.471 job0: (groupid=0, jobs=1): err=121 
(file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1072056: Mon Jul 15 13:47:42 2024 00:17:16.471 read: IOPS=24, BW=95.2KiB/s (97.5kB/s)(276KiB/2900msec) 00:17:16.471 slat (usec): min=26, max=20561, avg=327.09, stdev=2454.13 00:17:16.471 clat (usec): min=1492, max=42501, avg=41371.44, stdev=4872.84 00:17:16.471 lat (usec): min=1526, max=62018, avg=41702.85, stdev=5468.07 00:17:16.471 clat percentiles (usec): 00:17:16.471 | 1.00th=[ 1500], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:17:16.471 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:16.471 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:16.471 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:16.471 | 99.99th=[42730] 00:17:16.471 bw ( KiB/s): min= 96, max= 96, per=27.11%, avg=96.00, stdev= 0.00, samples=5 00:17:16.471 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:17:16.471 lat (msec) : 2=1.43%, 50=97.14% 00:17:16.471 cpu : usr=0.14%, sys=0.00%, ctx=73, majf=0, minf=1 00:17:16.471 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.472 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.472 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.472 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.472 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1072059: Mon Jul 15 13:47:42 2024 00:17:16.472 read: IOPS=24, BW=97.3KiB/s (99.6kB/s)(300KiB/3084msec) 00:17:16.472 slat (usec): min=19, max=215, avg=32.44, stdev=37.61 00:17:16.472 clat (usec): min=897, max=42107, avg=40792.03, stdev=6641.26 00:17:16.472 lat (usec): min=964, max=42132, avg=40824.59, stdev=6638.56 00:17:16.472 clat percentiles (usec): 00:17:16.472 | 1.00th=[ 898], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:17:16.472 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:16.472 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:16.472 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:16.472 | 99.99th=[42206] 00:17:16.472 bw ( KiB/s): min= 96, max= 96, per=27.11%, avg=96.00, stdev= 0.00, samples=5 00:17:16.472 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:17:16.472 lat (usec) : 1000=1.32% 00:17:16.472 lat (msec) : 2=1.32%, 50=96.05% 00:17:16.472 cpu : usr=0.10%, sys=0.00%, ctx=80, majf=0, minf=1 00:17:16.472 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.472 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.472 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.472 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.472 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1072079: Mon Jul 15 13:47:42 2024 00:17:16.472 read: IOPS=24, BW=95.9KiB/s (98.2kB/s)(264KiB/2754msec) 00:17:16.472 slat (nsec): min=25393, max=90583, avg=27125.61, stdev=7957.97 00:17:16.472 clat (usec): min=1283, max=42657, avg=41357.26, stdev=5009.63 00:17:16.472 lat (usec): min=1318, max=42684, avg=41384.38, stdev=5008.59 00:17:16.472 clat percentiles (usec): 00:17:16.472 | 1.00th=[ 1287], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 
00:17:16.472 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:16.472 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:16.472 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:16.472 | 99.99th=[42730] 00:17:16.472 bw ( KiB/s): min= 96, max= 96, per=27.11%, avg=96.00, stdev= 0.00, samples=5 00:17:16.472 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:17:16.472 lat (msec) : 2=1.49%, 50=97.01% 00:17:16.472 cpu : usr=0.11%, sys=0.00%, ctx=69, majf=0, minf=1 00:17:16.472 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.472 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.472 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.472 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.472 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1072086: Mon Jul 15 13:47:42 2024 00:17:16.472 read: IOPS=24, BW=97.5KiB/s (99.9kB/s)(252KiB/2584msec) 00:17:16.472 slat (nsec): min=24005, max=36300, avg=24584.88, stdev=1595.47 00:17:16.472 clat (usec): min=1124, max=42087, avg=40638.85, stdev=7201.13 00:17:16.472 lat (usec): min=1153, max=42112, avg=40663.44, stdev=7199.68 00:17:16.472 clat percentiles (usec): 00:17:16.472 | 1.00th=[ 1123], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:17:16.472 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:16.472 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:16.472 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:16.472 | 99.99th=[42206] 00:17:16.472 bw ( KiB/s): min= 96, max= 104, per=27.39%, avg=97.60, stdev= 3.58, samples=5 00:17:16.472 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:17:16.472 lat (msec) : 2=3.12%, 50=95.31% 00:17:16.472 cpu : usr=0.08%, sys=0.00%, ctx=64, majf=0, minf=2 00:17:16.472 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.472 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.472 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.472 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.472 00:17:16.472 Run status group 0 (all jobs): 00:17:16.472 READ: bw=354KiB/s (363kB/s), 95.2KiB/s-97.5KiB/s (97.5kB/s-99.9kB/s), io=1092KiB (1118kB), run=2584-3084msec 00:17:16.472 00:17:16.472 Disk stats (read/write): 00:17:16.472 nvme0n1: ios=94/0, merge=0/0, ticks=3461/0, in_queue=3461, util=98.40% 00:17:16.472 nvme0n2: ios=68/0, merge=0/0, ticks=2809/0, in_queue=2809, util=95.33% 00:17:16.472 nvme0n3: ios=95/0, merge=0/0, ticks=3025/0, in_queue=3025, util=100.00% 00:17:16.472 nvme0n4: ios=57/0, merge=0/0, ticks=2310/0, in_queue=2310, util=96.02% 00:17:16.472 13:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:16.472 13:47:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:16.733 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:16.733 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:16.733 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:16.733 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:16.993 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:16.993 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:17.253 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:17.253 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1071712 00:17:17.253 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:17.253 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:17.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.253 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:17.253 13:47:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:17.253 13:47:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:17.253 13:47:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:17.253 13:47:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:17.253 13:47:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:17.253 13:47:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:17.253 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:17.253 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:17.253 nvmf hotplug test: fio failed as expected 00:17:17.253 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:17.514 rmmod nvme_tcp 00:17:17.514 rmmod nvme_fabrics 00:17:17.514 rmmod nvme_keyring 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1068190 ']' 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1068190 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1068190 ']' 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1068190 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1068190 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1068190' 00:17:17.514 killing process with pid 1068190 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1068190 00:17:17.514 13:47:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1068190 00:17:17.775 13:47:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:17.775 13:47:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:17.775 13:47:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:17.775 13:47:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.775 13:47:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:17.775 13:47:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.775 13:47:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.775 13:47:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.692 13:47:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:19.692 00:17:19.692 real 0m28.485s 00:17:19.692 user 2m30.050s 00:17:19.692 sys 0m8.807s 00:17:19.692 13:47:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:19.692 13:47:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.692 ************************************ 00:17:19.692 END TEST nvmf_fio_target 00:17:19.692 ************************************ 00:17:19.954 13:47:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:19.954 13:47:46 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:19.954 13:47:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:19.954 13:47:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:19.954 13:47:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:19.954 ************************************ 00:17:19.954 START TEST nvmf_bdevio 00:17:19.954 ************************************ 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:19.954 * Looking for test storage... 00:17:19.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:19.954 13:47:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:26.664 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.664 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:26.664 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:26.929 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:26.929 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:26.929 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:26.929 
Found net devices under 0000:4b:00.1: cvl_0_1 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:26.929 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:27.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:17:27.191 00:17:27.191 --- 10.0.0.2 ping statistics --- 00:17:27.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.191 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:27.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:27.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:17:27.191 00:17:27.191 --- 10.0.0.1 ping statistics --- 00:17:27.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.191 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1077163 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1077163 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1077163 ']' 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:27.191 13:47:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:27.191 [2024-07-15 13:47:53.629241] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:17:27.191 [2024-07-15 13:47:53.629323] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.191 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.452 [2024-07-15 13:47:53.724274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:27.452 [2024-07-15 13:47:53.820116] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.452 [2024-07-15 13:47:53.820187] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:27.452 [2024-07-15 13:47:53.820195] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.452 [2024-07-15 13:47:53.820202] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.452 [2024-07-15 13:47:53.820208] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.452 [2024-07-15 13:47:53.820373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:27.452 [2024-07-15 13:47:53.820531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:27.452 [2024-07-15 13:47:53.820692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:27.452 [2024-07-15 13:47:53.820693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:28.024 [2024-07-15 13:47:54.474304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:28.024 Malloc0 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
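The target-side setup traced above reduces to five RPCs against the nvmf_tgt that was just started in the cvl_0_0_ns_spdk namespace. A rough standalone equivalent is sketched below; the harness's rpc_cmd wrapper effectively forwards these same arguments to scripts/rpc.py over the default /var/tmp/spdk.sock, the 64/512 sizes come from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE set earlier in bdevio.sh, and the transport flags are copied verbatim from the trace rather than re-derived:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport, options as traced
$RPC bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose Malloc0 as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The listener address and port match the namespace IP configured during nvmf_tcp_init, which is why the "Listening on 10.0.0.2 port 4420" notice follows immediately in the log.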
00:17:28.024 [2024-07-15 13:47:54.539808] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:28.024 13:47:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:28.024 { 00:17:28.024 "params": { 00:17:28.024 "name": "Nvme$subsystem", 00:17:28.024 "trtype": "$TEST_TRANSPORT", 00:17:28.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:28.024 "adrfam": "ipv4", 00:17:28.024 "trsvcid": "$NVMF_PORT", 00:17:28.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:28.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:28.024 "hdgst": ${hdgst:-false}, 00:17:28.024 "ddgst": ${ddgst:-false} 00:17:28.024 }, 00:17:28.024 "method": "bdev_nvme_attach_controller" 00:17:28.024 } 00:17:28.024 EOF 00:17:28.024 )") 00:17:28.284 13:47:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:28.284 13:47:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:28.284 13:47:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:28.284 13:47:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:28.284 "params": { 00:17:28.284 "name": "Nvme1", 00:17:28.284 "trtype": "tcp", 00:17:28.284 "traddr": "10.0.0.2", 00:17:28.284 "adrfam": "ipv4", 00:17:28.284 "trsvcid": "4420", 00:17:28.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:28.284 "hdgst": false, 00:17:28.284 "ddgst": false 00:17:28.284 }, 00:17:28.284 "method": "bdev_nvme_attach_controller" 00:17:28.284 }' 00:17:28.284 [2024-07-15 13:47:54.596315] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
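The JSON fragment printed just above is what gen_nvmf_target_json hands to bdevio on /dev/fd/62: a bdev-subsystem config entry that attaches an NVMe-oF controller over TCP to the listener created a moment earlier. A self-contained sketch of an equivalent invocation follows; the outer "subsystems"/"config" wrapper here follows SPDK's usual --json layout (the trace only prints the inner bdev_nvme_attach_controller entry), and the temporary file path is arbitrary:

cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json

bdevio then exercises the resulting Nvme1n1 bdev with the CUnit suite whose results appear below.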
00:17:28.284 [2024-07-15 13:47:54.596383] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1077256 ] 00:17:28.284 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.284 [2024-07-15 13:47:54.662546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:28.284 [2024-07-15 13:47:54.737929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.284 [2024-07-15 13:47:54.738045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.284 [2024-07-15 13:47:54.738049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.543 I/O targets: 00:17:28.543 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:28.543 00:17:28.543 00:17:28.543 CUnit - A unit testing framework for C - Version 2.1-3 00:17:28.543 http://cunit.sourceforge.net/ 00:17:28.543 00:17:28.543 00:17:28.543 Suite: bdevio tests on: Nvme1n1 00:17:28.543 Test: blockdev write read block ...passed 00:17:28.803 Test: blockdev write zeroes read block ...passed 00:17:28.803 Test: blockdev write zeroes read no split ...passed 00:17:28.803 Test: blockdev write zeroes read split ...passed 00:17:28.804 Test: blockdev write zeroes read split partial ...passed 00:17:28.804 Test: blockdev reset ...[2024-07-15 13:47:55.223599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:28.804 [2024-07-15 13:47:55.223675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c98ce0 (9): Bad file descriptor 00:17:28.804 [2024-07-15 13:47:55.240227] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:28.804 passed 00:17:28.804 Test: blockdev write read 8 blocks ...passed 00:17:28.804 Test: blockdev write read size > 128k ...passed 00:17:28.804 Test: blockdev write read invalid size ...passed 00:17:28.804 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:28.804 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:28.804 Test: blockdev write read max offset ...passed 00:17:29.063 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.063 Test: blockdev writev readv 8 blocks ...passed 00:17:29.063 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.063 Test: blockdev writev readv block ...passed 00:17:29.063 Test: blockdev writev readv size > 128k ...passed 00:17:29.063 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.063 Test: blockdev comparev and writev ...[2024-07-15 13:47:55.508266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.063 [2024-07-15 13:47:55.508292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.063 [2024-07-15 13:47:55.508303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.063 [2024-07-15 13:47:55.508309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:29.063 [2024-07-15 13:47:55.508875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.063 [2024-07-15 13:47:55.508883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:29.063 [2024-07-15 13:47:55.508893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.063 [2024-07-15 13:47:55.508898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:29.063 [2024-07-15 13:47:55.509454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.063 [2024-07-15 13:47:55.509463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:29.063 [2024-07-15 13:47:55.509472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.063 [2024-07-15 13:47:55.509478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:29.063 [2024-07-15 13:47:55.510044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.063 [2024-07-15 13:47:55.510052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:29.063 [2024-07-15 13:47:55.510061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.063 [2024-07-15 13:47:55.510067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:29.063 passed 00:17:29.324 Test: blockdev nvme passthru rw ...passed 00:17:29.324 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:47:55.595144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.324 [2024-07-15 13:47:55.595155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:29.324 [2024-07-15 13:47:55.595603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.324 [2024-07-15 13:47:55.595615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:29.324 [2024-07-15 13:47:55.596033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.324 [2024-07-15 13:47:55.596041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:29.324 [2024-07-15 13:47:55.596483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.324 [2024-07-15 13:47:55.596491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:29.324 passed 00:17:29.324 Test: blockdev nvme admin passthru ...passed 00:17:29.324 Test: blockdev copy ...passed 00:17:29.324 00:17:29.324 Run Summary: Type Total Ran Passed Failed Inactive 00:17:29.324 suites 1 1 n/a 0 0 00:17:29.324 tests 23 23 23 0 0 00:17:29.324 asserts 152 152 152 0 n/a 00:17:29.324 00:17:29.324 Elapsed time = 1.318 seconds 00:17:29.324 13:47:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:29.324 13:47:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.324 13:47:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:29.324 13:47:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.324 13:47:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:29.324 13:47:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:29.324 13:47:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:29.324 13:47:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:29.324 13:47:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:29.325 13:47:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:29.325 13:47:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:29.325 13:47:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:29.325 rmmod nvme_tcp 00:17:29.325 rmmod nvme_fabrics 00:17:29.325 rmmod nvme_keyring 00:17:29.325 13:47:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:29.325 13:47:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:29.325 13:47:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:29.325 13:47:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1077163 ']' 00:17:29.325 13:47:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1077163 00:17:29.325 13:47:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1077163 ']' 00:17:29.325 13:47:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1077163 00:17:29.325 13:47:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:29.584 13:47:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:29.584 13:47:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1077163 00:17:29.584 13:47:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:29.584 13:47:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:29.584 13:47:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1077163' 00:17:29.584 killing process with pid 1077163 00:17:29.584 13:47:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1077163 00:17:29.585 13:47:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1077163 00:17:29.585 13:47:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:29.585 13:47:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:29.585 13:47:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:29.585 13:47:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:29.585 13:47:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:29.585 13:47:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.585 13:47:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:29.585 13:47:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.121 13:47:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:32.121 00:17:32.121 real 0m11.855s 00:17:32.121 user 0m13.326s 00:17:32.121 sys 0m5.964s 00:17:32.121 13:47:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:32.121 13:47:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:32.121 ************************************ 00:17:32.121 END TEST nvmf_bdevio 00:17:32.121 ************************************ 00:17:32.121 13:47:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:32.121 13:47:58 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:32.121 13:47:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:32.121 13:47:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:32.121 13:47:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:32.121 ************************************ 00:17:32.121 START TEST nvmf_auth_target 00:17:32.121 ************************************ 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:32.121 * Looking for test storage... 
00:17:32.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:32.121 13:47:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.701 13:48:05 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:38.701 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:38.701 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:17:38.701 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:38.701 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:38.701 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:38.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:17:38.961 00:17:38.961 --- 10.0.0.2 ping statistics --- 00:17:38.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.961 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:38.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:17:38.961 00:17:38.961 --- 10.0.0.1 ping statistics --- 00:17:38.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.961 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1081788 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1081788 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1081788 ']' 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
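Condensed, the nvmf_tcp_init plumbing traced here (and earlier for the bdevio run) gives the auth test the same two-port topology: the first ice/E810 port is moved into a dedicated namespace as the target at 10.0.0.2, the second stays in the default namespace as the initiator at 10.0.0.1, and TCP port 4420 is opened. The commands below are lifted from the trace, with only the address-flush and cleanup steps omitted:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target NIC into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # firewall rule exactly as traced
ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

This is the connectivity the ping output above confirms before nvmfappstart launches nvmf_tgt inside the namespace with -L nvmf_auth.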
00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.961 13:48:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1082040 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d63887e4b52bb0fefcd8b6d7cb5c103b70e325cef9052cb0 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.n3x 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d63887e4b52bb0fefcd8b6d7cb5c103b70e325cef9052cb0 0 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d63887e4b52bb0fefcd8b6d7cb5c103b70e325cef9052cb0 0 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d63887e4b52bb0fefcd8b6d7cb5c103b70e325cef9052cb0 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.n3x 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.n3x 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.n3x 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ff8a6bd4b911eb1291e6dbca4b18ba3ff4230e3960d9a86b94e89549a54c8f02 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Jfh 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ff8a6bd4b911eb1291e6dbca4b18ba3ff4230e3960d9a86b94e89549a54c8f02 3 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ff8a6bd4b911eb1291e6dbca4b18ba3ff4230e3960d9a86b94e89549a54c8f02 3 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ff8a6bd4b911eb1291e6dbca4b18ba3ff4230e3960d9a86b94e89549a54c8f02 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Jfh 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Jfh 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Jfh 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=aaf0eb415500a57a13e11d176d64be37 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Bob 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key aaf0eb415500a57a13e11d176d64be37 1 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 aaf0eb415500a57a13e11d176d64be37 1 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=aaf0eb415500a57a13e11d176d64be37 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:39.899 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Bob 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Bob 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Bob 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=21bd799ad25152ff2f2870ff1a53e410631f907e52e1cef7 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.yKK 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 21bd799ad25152ff2f2870ff1a53e410631f907e52e1cef7 2 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 21bd799ad25152ff2f2870ff1a53e410631f907e52e1cef7 2 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=21bd799ad25152ff2f2870ff1a53e410631f907e52e1cef7 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.yKK 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.yKK 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.yKK 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=45f0a6b66fe309db2078fdee7a3d6c175fe648b58ff9f2ae 00:17:40.160 
13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vBT 00:17:40.160 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 45f0a6b66fe309db2078fdee7a3d6c175fe648b58ff9f2ae 2 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 45f0a6b66fe309db2078fdee7a3d6c175fe648b58ff9f2ae 2 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=45f0a6b66fe309db2078fdee7a3d6c175fe648b58ff9f2ae 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vBT 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vBT 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.vBT 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=40f201f1edd5d883fbc0646983a2c4d2 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.C5s 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 40f201f1edd5d883fbc0646983a2c4d2 1 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 40f201f1edd5d883fbc0646983a2c4d2 1 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=40f201f1edd5d883fbc0646983a2c4d2 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.C5s 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.C5s 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.C5s 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fc55f3c01fb1921096b137128dac502aeb3a98f5d44bf26f640d0aa2c5e070ce 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.LOp 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fc55f3c01fb1921096b137128dac502aeb3a98f5d44bf26f640d0aa2c5e070ce 3 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fc55f3c01fb1921096b137128dac502aeb3a98f5d44bf26f640d0aa2c5e070ce 3 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fc55f3c01fb1921096b137128dac502aeb3a98f5d44bf26f640d0aa2c5e070ce 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.LOp 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.LOp 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.LOp 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1081788 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1081788 ']' 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
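The four key/ctrlr-key pairs generated above all come from gen_dhchap_key in nvmf/common.sh; condensed from the trace, the helper looks roughly like the sketch below. The DHHC-1 wrapping itself is an inline python step that xtrace does not expand, so format_dhchap_key is only referenced here, not reimplemented; its output is the DHHC-1:0N:<base64> secret strings that show up in the nvme connect commands later in this log.

# gen_dhchap_key <digest> <len> -- sketch reconstructed from the xtrace above, not the verbatim helper.
gen_dhchap_key() {
    local digest=$1 len=$2                                # e.g. "null" 48, "sha256" 32, "sha512" 64
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # len hex characters of random key material
    file=$(mktemp -t "spdk.key-$digest.XXX")              # e.g. /tmp/spdk.key-null.n3x
    # format_dhchap_key does the DHHC-1 wrapping (python, not expanded in the trace);
    # writing its output into the temp file is assumed here.
    format_dhchap_key "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"                                          # caller stores the path in keys[]/ckeys[]
}
# Usage mirroring the trace: keys[0]=$(gen_dhchap_key null 48); ckeys[0]=$(gen_dhchap_key sha512 64)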
00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.161 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.421 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.421 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:40.421 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1082040 /var/tmp/host.sock 00:17:40.421 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1082040 ']' 00:17:40.421 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:40.421 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.421 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:40.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:40.421 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.421 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.682 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.682 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:40.682 13:48:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:40.682 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.682 13:48:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.682 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.682 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:40.682 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.n3x 00:17:40.682 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.682 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.682 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.682 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.n3x 00:17:40.682 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.n3x 00:17:40.682 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Jfh ]] 00:17:40.682 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jfh 00:17:40.682 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.682 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.682 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.682 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jfh 00:17:40.682 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jfh 00:17:40.942 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:40.942 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Bob 00:17:40.942 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.942 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.942 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.942 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Bob 00:17:40.942 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Bob 00:17:41.203 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.yKK ]] 00:17:41.203 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yKK 00:17:41.203 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.203 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.203 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.203 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yKK 00:17:41.203 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yKK 00:17:41.203 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:41.203 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.vBT 00:17:41.203 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.203 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.203 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.203 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.vBT 00:17:41.203 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.vBT 00:17:41.463 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.C5s ]] 00:17:41.463 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C5s 00:17:41.463 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.463 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.463 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.463 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C5s 00:17:41.463 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.C5s 00:17:41.463 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:41.463 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.LOp 00:17:41.463 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.463 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.463 13:48:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.463 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.LOp 00:17:41.463 13:48:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.LOp 00:17:41.723 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:41.723 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:41.723 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.723 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.723 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:41.723 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:41.984 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:41.984 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.984 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.984 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:41.984 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:41.984 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.984 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.984 13:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.984 13:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.984 13:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.984 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.984 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.984 00:17:42.245 13:48:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.245 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.245 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.245 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.245 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.245 13:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.245 13:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.245 13:48:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.245 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.245 { 00:17:42.245 "cntlid": 1, 00:17:42.245 "qid": 0, 00:17:42.245 "state": "enabled", 00:17:42.245 "thread": "nvmf_tgt_poll_group_000", 00:17:42.245 "listen_address": { 00:17:42.245 "trtype": "TCP", 00:17:42.245 "adrfam": "IPv4", 00:17:42.245 "traddr": "10.0.0.2", 00:17:42.245 "trsvcid": "4420" 00:17:42.245 }, 00:17:42.245 "peer_address": { 00:17:42.245 "trtype": "TCP", 00:17:42.245 "adrfam": "IPv4", 00:17:42.245 "traddr": "10.0.0.1", 00:17:42.245 "trsvcid": "35890" 00:17:42.245 }, 00:17:42.245 "auth": { 00:17:42.245 "state": "completed", 00:17:42.245 "digest": "sha256", 00:17:42.245 "dhgroup": "null" 00:17:42.245 } 00:17:42.245 } 00:17:42.245 ]' 00:17:42.245 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.245 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.245 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.505 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:42.505 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.505 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.505 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.505 13:48:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.505 13:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.445 13:48:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.445 13:48:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.446 13:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.446 13:48:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.706 00:17:43.706 13:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.706 13:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.706 13:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.966 13:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.966 13:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.966 13:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.966 13:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.966 13:48:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.966 13:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.966 { 00:17:43.966 "cntlid": 3, 00:17:43.966 "qid": 0, 00:17:43.966 
"state": "enabled", 00:17:43.966 "thread": "nvmf_tgt_poll_group_000", 00:17:43.966 "listen_address": { 00:17:43.966 "trtype": "TCP", 00:17:43.966 "adrfam": "IPv4", 00:17:43.966 "traddr": "10.0.0.2", 00:17:43.966 "trsvcid": "4420" 00:17:43.966 }, 00:17:43.966 "peer_address": { 00:17:43.966 "trtype": "TCP", 00:17:43.966 "adrfam": "IPv4", 00:17:43.966 "traddr": "10.0.0.1", 00:17:43.966 "trsvcid": "35908" 00:17:43.966 }, 00:17:43.966 "auth": { 00:17:43.966 "state": "completed", 00:17:43.966 "digest": "sha256", 00:17:43.966 "dhgroup": "null" 00:17:43.966 } 00:17:43.966 } 00:17:43.966 ]' 00:17:43.966 13:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.966 13:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.966 13:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.966 13:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:43.966 13:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.966 13:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.966 13:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.226 13:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.226 13:48:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:45.166 13:48:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.166 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.425 00:17:45.425 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.425 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.425 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.703 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.703 13:48:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.703 13:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.703 13:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.703 13:48:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.703 13:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.703 { 00:17:45.703 "cntlid": 5, 00:17:45.703 "qid": 0, 00:17:45.703 "state": "enabled", 00:17:45.703 "thread": "nvmf_tgt_poll_group_000", 00:17:45.703 "listen_address": { 00:17:45.703 "trtype": "TCP", 00:17:45.703 "adrfam": "IPv4", 00:17:45.703 "traddr": "10.0.0.2", 00:17:45.703 "trsvcid": "4420" 00:17:45.703 }, 00:17:45.703 "peer_address": { 00:17:45.703 "trtype": "TCP", 00:17:45.703 "adrfam": "IPv4", 00:17:45.703 "traddr": "10.0.0.1", 00:17:45.703 "trsvcid": "35922" 00:17:45.703 }, 00:17:45.703 "auth": { 00:17:45.703 "state": "completed", 00:17:45.703 "digest": "sha256", 00:17:45.703 "dhgroup": "null" 00:17:45.703 } 00:17:45.703 } 00:17:45.703 ]' 00:17:45.703 13:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.703 13:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.703 13:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.703 13:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:45.703 13:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:45.703 13:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.703 13:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.703 13:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.009 13:48:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:17:46.579 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.579 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.579 13:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.579 13:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.579 13:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.579 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.579 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:46.579 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:46.839 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:46.839 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.839 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:46.839 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:46.839 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:46.839 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.839 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:46.839 13:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.839 13:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.839 13:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.839 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.839 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:47.099 00:17:47.099 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.099 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.099 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.358 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.358 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.358 13:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.358 13:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.358 13:48:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.358 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.358 { 00:17:47.358 "cntlid": 7, 00:17:47.358 "qid": 0, 00:17:47.358 "state": "enabled", 00:17:47.358 "thread": "nvmf_tgt_poll_group_000", 00:17:47.358 "listen_address": { 00:17:47.358 "trtype": "TCP", 00:17:47.358 "adrfam": "IPv4", 00:17:47.358 "traddr": "10.0.0.2", 00:17:47.359 "trsvcid": "4420" 00:17:47.359 }, 00:17:47.359 "peer_address": { 00:17:47.359 "trtype": "TCP", 00:17:47.359 "adrfam": "IPv4", 00:17:47.359 "traddr": "10.0.0.1", 00:17:47.359 "trsvcid": "35956" 00:17:47.359 }, 00:17:47.359 "auth": { 00:17:47.359 "state": "completed", 00:17:47.359 "digest": "sha256", 00:17:47.359 "dhgroup": "null" 00:17:47.359 } 00:17:47.359 } 00:17:47.359 ]' 00:17:47.359 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.359 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.359 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.359 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:47.359 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.359 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.359 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.359 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.619 13:48:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:17:48.190 13:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.190 13:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.190 13:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.190 13:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.190 13:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.190 13:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.190 13:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.190 13:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:48.190 13:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:48.449 13:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:48.449 13:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.449 13:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:48.449 13:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:48.449 13:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:48.449 13:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.449 13:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.449 13:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.449 13:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.449 13:48:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.449 13:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.449 13:48:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.709 00:17:48.709 13:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.709 13:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.709 13:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.968 13:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.968 13:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.968 13:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:17:48.968 13:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.968 13:48:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.968 13:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.968 { 00:17:48.968 "cntlid": 9, 00:17:48.968 "qid": 0, 00:17:48.968 "state": "enabled", 00:17:48.968 "thread": "nvmf_tgt_poll_group_000", 00:17:48.968 "listen_address": { 00:17:48.968 "trtype": "TCP", 00:17:48.968 "adrfam": "IPv4", 00:17:48.968 "traddr": "10.0.0.2", 00:17:48.968 "trsvcid": "4420" 00:17:48.968 }, 00:17:48.968 "peer_address": { 00:17:48.968 "trtype": "TCP", 00:17:48.968 "adrfam": "IPv4", 00:17:48.968 "traddr": "10.0.0.1", 00:17:48.968 "trsvcid": "35984" 00:17:48.968 }, 00:17:48.968 "auth": { 00:17:48.968 "state": "completed", 00:17:48.968 "digest": "sha256", 00:17:48.968 "dhgroup": "ffdhe2048" 00:17:48.968 } 00:17:48.968 } 00:17:48.968 ]' 00:17:48.968 13:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.968 13:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.968 13:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.968 13:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.968 13:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.968 13:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.968 13:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.968 13:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.228 13:48:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:17:50.166 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.166 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.167 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.426 00:17:50.426 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.426 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.426 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.426 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.426 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.426 13:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.426 13:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.426 13:48:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.426 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.426 { 00:17:50.426 "cntlid": 11, 00:17:50.426 "qid": 0, 00:17:50.426 "state": "enabled", 00:17:50.426 "thread": "nvmf_tgt_poll_group_000", 00:17:50.426 "listen_address": { 00:17:50.426 "trtype": "TCP", 00:17:50.426 "adrfam": "IPv4", 00:17:50.426 "traddr": "10.0.0.2", 00:17:50.426 "trsvcid": "4420" 00:17:50.426 }, 00:17:50.426 "peer_address": { 00:17:50.426 "trtype": "TCP", 00:17:50.426 "adrfam": "IPv4", 00:17:50.426 "traddr": "10.0.0.1", 00:17:50.426 "trsvcid": "36010" 00:17:50.426 }, 00:17:50.426 "auth": { 00:17:50.426 "state": "completed", 00:17:50.426 "digest": "sha256", 00:17:50.426 "dhgroup": "ffdhe2048" 00:17:50.426 } 00:17:50.426 } 00:17:50.426 ]' 00:17:50.426 
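Stripped of the rpc_cmd/hostrpc wrappers, one connect_authenticate pass like the ones traced above amounts to the calls below. The NQNs, host UUID, address and key names are the ones used in this run; rpc_cmd in the trace reaches the namespaced nvmf_tgt over /var/tmp/spdk.sock, which is assumed here to be equivalent to calling rpc.py against that default socket.

# One connect_authenticate round (digest sha256, dhgroup ffdhe2048, key slot 1), condensed.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Host app: restrict the initiator to the digest/dhgroup combination under test.
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# Target app: allow the host on the subsystem with a DH-HMAC-CHAP key pair.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Host app: attach an authenticated controller, then confirm the qpair completed auth.
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # expect "completed"
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
# Kernel-initiator leg (secrets elided; the full DHHC-1 strings are in the nvme connect lines
# of this log): nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
#     --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'
# followed by nvme disconnect -n "$SUBNQN" and cleanup via nvmf_subsystem_remove_host.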
13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.685 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.686 13:48:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.686 13:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:50.686 13:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.686 13:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.686 13:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.686 13:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.945 13:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:17:51.515 13:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.515 13:48:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.515 13:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.515 13:48:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.515 13:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.515 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.515 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:51.515 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:51.776 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:51.776 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.776 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:51.776 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:51.776 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:51.776 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.776 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.776 13:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.776 13:48:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:51.776 13:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.776 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.776 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.037 00:17:52.037 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.037 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.037 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.297 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.297 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.297 13:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.297 13:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.297 13:48:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.297 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.297 { 00:17:52.297 "cntlid": 13, 00:17:52.297 "qid": 0, 00:17:52.297 "state": "enabled", 00:17:52.297 "thread": "nvmf_tgt_poll_group_000", 00:17:52.297 "listen_address": { 00:17:52.297 "trtype": "TCP", 00:17:52.297 "adrfam": "IPv4", 00:17:52.297 "traddr": "10.0.0.2", 00:17:52.297 "trsvcid": "4420" 00:17:52.297 }, 00:17:52.297 "peer_address": { 00:17:52.297 "trtype": "TCP", 00:17:52.297 "adrfam": "IPv4", 00:17:52.297 "traddr": "10.0.0.1", 00:17:52.297 "trsvcid": "50884" 00:17:52.297 }, 00:17:52.297 "auth": { 00:17:52.297 "state": "completed", 00:17:52.297 "digest": "sha256", 00:17:52.297 "dhgroup": "ffdhe2048" 00:17:52.297 } 00:17:52.297 } 00:17:52.297 ]' 00:17:52.297 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.297 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.297 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.297 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:52.297 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.297 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.297 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.297 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.557 13:48:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:17:53.126 13:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.126 13:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.126 13:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.126 13:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.126 13:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.126 13:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.126 13:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:53.386 13:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:53.386 13:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:53.386 13:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.386 13:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:53.386 13:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:53.386 13:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:53.386 13:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.386 13:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:53.386 13:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.386 13:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.386 13:48:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.386 13:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.386 13:48:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.647 00:17:53.647 13:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.648 13:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.648 13:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.909 13:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.909 13:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.909 13:48:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.909 13:48:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.909 13:48:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.909 13:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.909 { 00:17:53.909 "cntlid": 15, 00:17:53.909 "qid": 0, 00:17:53.909 "state": "enabled", 00:17:53.909 "thread": "nvmf_tgt_poll_group_000", 00:17:53.909 "listen_address": { 00:17:53.909 "trtype": "TCP", 00:17:53.909 "adrfam": "IPv4", 00:17:53.909 "traddr": "10.0.0.2", 00:17:53.909 "trsvcid": "4420" 00:17:53.909 }, 00:17:53.909 "peer_address": { 00:17:53.909 "trtype": "TCP", 00:17:53.909 "adrfam": "IPv4", 00:17:53.909 "traddr": "10.0.0.1", 00:17:53.909 "trsvcid": "50902" 00:17:53.909 }, 00:17:53.909 "auth": { 00:17:53.909 "state": "completed", 00:17:53.909 "digest": "sha256", 00:17:53.909 "dhgroup": "ffdhe2048" 00:17:53.909 } 00:17:53.909 } 00:17:53.909 ]' 00:17:53.909 13:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.909 13:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.909 13:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.909 13:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:53.909 13:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.909 13:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.909 13:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.909 13:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.170 13:48:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.109 13:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.110 13:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.110 13:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.110 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.110 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.369 00:17:55.369 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.369 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.369 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.369 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.369 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.369 13:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.369 13:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.369 13:48:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.369 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.369 { 00:17:55.369 "cntlid": 17, 00:17:55.369 "qid": 0, 00:17:55.369 "state": "enabled", 00:17:55.369 "thread": "nvmf_tgt_poll_group_000", 00:17:55.369 "listen_address": { 00:17:55.369 "trtype": "TCP", 00:17:55.369 "adrfam": "IPv4", 00:17:55.369 "traddr": 
"10.0.0.2", 00:17:55.369 "trsvcid": "4420" 00:17:55.369 }, 00:17:55.369 "peer_address": { 00:17:55.369 "trtype": "TCP", 00:17:55.369 "adrfam": "IPv4", 00:17:55.369 "traddr": "10.0.0.1", 00:17:55.369 "trsvcid": "50934" 00:17:55.369 }, 00:17:55.369 "auth": { 00:17:55.369 "state": "completed", 00:17:55.369 "digest": "sha256", 00:17:55.369 "dhgroup": "ffdhe3072" 00:17:55.369 } 00:17:55.369 } 00:17:55.369 ]' 00:17:55.630 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.630 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.630 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.630 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:55.630 13:48:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.630 13:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.630 13:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.630 13:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.891 13:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:17:56.461 13:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.461 13:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.461 13:48:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.462 13:48:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.462 13:48:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.462 13:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.462 13:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:56.462 13:48:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:56.721 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:56.721 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.721 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.721 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:56.721 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:56.721 13:48:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.721 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.721 13:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.721 13:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.721 13:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.721 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.721 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.980 00:17:56.980 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.980 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.980 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.240 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.240 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.240 13:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.240 13:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.240 13:48:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.240 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.240 { 00:17:57.240 "cntlid": 19, 00:17:57.240 "qid": 0, 00:17:57.240 "state": "enabled", 00:17:57.240 "thread": "nvmf_tgt_poll_group_000", 00:17:57.240 "listen_address": { 00:17:57.240 "trtype": "TCP", 00:17:57.240 "adrfam": "IPv4", 00:17:57.240 "traddr": "10.0.0.2", 00:17:57.240 "trsvcid": "4420" 00:17:57.240 }, 00:17:57.240 "peer_address": { 00:17:57.240 "trtype": "TCP", 00:17:57.240 "adrfam": "IPv4", 00:17:57.240 "traddr": "10.0.0.1", 00:17:57.240 "trsvcid": "50970" 00:17:57.240 }, 00:17:57.240 "auth": { 00:17:57.240 "state": "completed", 00:17:57.240 "digest": "sha256", 00:17:57.240 "dhgroup": "ffdhe3072" 00:17:57.240 } 00:17:57.240 } 00:17:57.240 ]' 00:17:57.240 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.240 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.240 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.240 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:57.240 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.240 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.240 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.240 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.500 13:48:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.440 13:48:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.698 00:17:58.698 13:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.698 13:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.698 13:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.698 13:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.699 13:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.699 13:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.699 13:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.699 13:48:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.699 13:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.699 { 00:17:58.699 "cntlid": 21, 00:17:58.699 "qid": 0, 00:17:58.699 "state": "enabled", 00:17:58.699 "thread": "nvmf_tgt_poll_group_000", 00:17:58.699 "listen_address": { 00:17:58.699 "trtype": "TCP", 00:17:58.699 "adrfam": "IPv4", 00:17:58.699 "traddr": "10.0.0.2", 00:17:58.699 "trsvcid": "4420" 00:17:58.699 }, 00:17:58.699 "peer_address": { 00:17:58.699 "trtype": "TCP", 00:17:58.699 "adrfam": "IPv4", 00:17:58.699 "traddr": "10.0.0.1", 00:17:58.699 "trsvcid": "50992" 00:17:58.699 }, 00:17:58.699 "auth": { 00:17:58.699 "state": "completed", 00:17:58.699 "digest": "sha256", 00:17:58.699 "dhgroup": "ffdhe3072" 00:17:58.699 } 00:17:58.699 } 00:17:58.699 ]' 00:17:58.699 13:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.958 13:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.958 13:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.958 13:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:58.958 13:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.958 13:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.958 13:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.958 13:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.218 13:48:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:17:59.789 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:59.789 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.789 13:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.789 13:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.789 13:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.789 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.789 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:59.789 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:00.049 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:00.049 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.049 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.049 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:00.049 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:00.049 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.049 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:00.049 13:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.049 13:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.049 13:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.049 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.049 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.310 00:18:00.310 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.310 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.310 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.599 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.599 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.599 13:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.599 13:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:00.599 13:48:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.599 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.599 { 00:18:00.599 "cntlid": 23, 00:18:00.599 "qid": 0, 00:18:00.599 "state": "enabled", 00:18:00.599 "thread": "nvmf_tgt_poll_group_000", 00:18:00.599 "listen_address": { 00:18:00.599 "trtype": "TCP", 00:18:00.599 "adrfam": "IPv4", 00:18:00.599 "traddr": "10.0.0.2", 00:18:00.599 "trsvcid": "4420" 00:18:00.599 }, 00:18:00.599 "peer_address": { 00:18:00.599 "trtype": "TCP", 00:18:00.599 "adrfam": "IPv4", 00:18:00.599 "traddr": "10.0.0.1", 00:18:00.599 "trsvcid": "51012" 00:18:00.599 }, 00:18:00.599 "auth": { 00:18:00.599 "state": "completed", 00:18:00.599 "digest": "sha256", 00:18:00.599 "dhgroup": "ffdhe3072" 00:18:00.599 } 00:18:00.599 } 00:18:00.599 ]' 00:18:00.599 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.599 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.599 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.599 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:00.599 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.599 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.599 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.599 13:48:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.887 13:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:18:01.458 13:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.458 13:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.458 13:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.458 13:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.458 13:48:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.458 13:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.458 13:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.458 13:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:01.458 13:48:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:01.719 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:01.719 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.719 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.719 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:01.719 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:01.719 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.719 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.719 13:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.719 13:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.719 13:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.719 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.719 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.980 00:18:01.980 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.980 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.980 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.240 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.240 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.240 13:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.240 13:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.240 13:48:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.240 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.240 { 00:18:02.240 "cntlid": 25, 00:18:02.240 "qid": 0, 00:18:02.240 "state": "enabled", 00:18:02.240 "thread": "nvmf_tgt_poll_group_000", 00:18:02.240 "listen_address": { 00:18:02.240 "trtype": "TCP", 00:18:02.240 "adrfam": "IPv4", 00:18:02.240 "traddr": "10.0.0.2", 00:18:02.240 "trsvcid": "4420" 00:18:02.240 }, 00:18:02.240 "peer_address": { 00:18:02.240 "trtype": "TCP", 00:18:02.240 "adrfam": "IPv4", 00:18:02.240 "traddr": "10.0.0.1", 00:18:02.240 "trsvcid": "60250" 00:18:02.240 }, 00:18:02.240 "auth": { 00:18:02.240 "state": "completed", 00:18:02.240 "digest": "sha256", 00:18:02.240 "dhgroup": "ffdhe4096" 00:18:02.240 } 00:18:02.240 } 00:18:02.240 ]' 00:18:02.240 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.240 13:48:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.240 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.240 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.240 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.240 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.240 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.240 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.501 13:48:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:18:03.071 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.071 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.071 13:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.071 13:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.071 13:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.071 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.071 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:03.071 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:03.332 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:03.332 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.332 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.332 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:03.332 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:03.332 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.332 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.332 13:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.332 13:48:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.332 13:48:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.332 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.332 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.592 00:18:03.592 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.592 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.592 13:48:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.853 13:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.853 13:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.853 13:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.853 13:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.853 13:48:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.853 13:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.853 { 00:18:03.853 "cntlid": 27, 00:18:03.853 "qid": 0, 00:18:03.853 "state": "enabled", 00:18:03.853 "thread": "nvmf_tgt_poll_group_000", 00:18:03.853 "listen_address": { 00:18:03.853 "trtype": "TCP", 00:18:03.853 "adrfam": "IPv4", 00:18:03.853 "traddr": "10.0.0.2", 00:18:03.853 "trsvcid": "4420" 00:18:03.853 }, 00:18:03.853 "peer_address": { 00:18:03.853 "trtype": "TCP", 00:18:03.853 "adrfam": "IPv4", 00:18:03.853 "traddr": "10.0.0.1", 00:18:03.853 "trsvcid": "60282" 00:18:03.853 }, 00:18:03.853 "auth": { 00:18:03.853 "state": "completed", 00:18:03.853 "digest": "sha256", 00:18:03.853 "dhgroup": "ffdhe4096" 00:18:03.853 } 00:18:03.853 } 00:18:03.853 ]' 00:18:03.853 13:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.853 13:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.853 13:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.853 13:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.853 13:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.853 13:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.853 13:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.853 13:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.114 13:48:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.054 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.314 00:18:05.314 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.314 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.314 13:48:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.575 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.575 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.575 13:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.575 13:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.575 13:48:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.575 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.575 { 00:18:05.575 "cntlid": 29, 00:18:05.575 "qid": 0, 00:18:05.575 "state": "enabled", 00:18:05.575 "thread": "nvmf_tgt_poll_group_000", 00:18:05.575 "listen_address": { 00:18:05.575 "trtype": "TCP", 00:18:05.575 "adrfam": "IPv4", 00:18:05.575 "traddr": "10.0.0.2", 00:18:05.575 "trsvcid": "4420" 00:18:05.575 }, 00:18:05.575 "peer_address": { 00:18:05.575 "trtype": "TCP", 00:18:05.575 "adrfam": "IPv4", 00:18:05.575 "traddr": "10.0.0.1", 00:18:05.575 "trsvcid": "60306" 00:18:05.575 }, 00:18:05.575 "auth": { 00:18:05.575 "state": "completed", 00:18:05.575 "digest": "sha256", 00:18:05.575 "dhgroup": "ffdhe4096" 00:18:05.575 } 00:18:05.575 } 00:18:05.575 ]' 00:18:05.575 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.575 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.575 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.575 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.575 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.575 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.575 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.575 13:48:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.835 13:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:18:06.405 13:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.405 13:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.405 13:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.405 13:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.666 13:48:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.666 13:48:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.666 13:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:06.666 13:48:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:06.666 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:06.666 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.666 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.666 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:06.666 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:06.666 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.666 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:06.666 13:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.666 13:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.666 13:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.666 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.666 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.927 00:18:06.927 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.927 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.927 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.187 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.187 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.187 13:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.187 13:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.187 13:48:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.187 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.187 { 00:18:07.187 "cntlid": 31, 00:18:07.187 "qid": 0, 00:18:07.187 "state": "enabled", 00:18:07.187 "thread": "nvmf_tgt_poll_group_000", 00:18:07.187 "listen_address": { 00:18:07.187 "trtype": "TCP", 00:18:07.187 "adrfam": "IPv4", 00:18:07.187 "traddr": "10.0.0.2", 00:18:07.187 "trsvcid": "4420" 00:18:07.187 }, 
00:18:07.187 "peer_address": { 00:18:07.187 "trtype": "TCP", 00:18:07.187 "adrfam": "IPv4", 00:18:07.187 "traddr": "10.0.0.1", 00:18:07.187 "trsvcid": "60336" 00:18:07.187 }, 00:18:07.187 "auth": { 00:18:07.187 "state": "completed", 00:18:07.187 "digest": "sha256", 00:18:07.187 "dhgroup": "ffdhe4096" 00:18:07.187 } 00:18:07.187 } 00:18:07.187 ]' 00:18:07.187 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.187 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.187 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.187 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:07.187 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.187 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.187 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.187 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.447 13:48:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.388 13:48:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.648 00:18:08.909 13:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.909 13:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.909 13:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.909 13:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.909 13:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.909 13:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.909 13:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.909 13:48:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.909 13:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.909 { 00:18:08.909 "cntlid": 33, 00:18:08.909 "qid": 0, 00:18:08.909 "state": "enabled", 00:18:08.909 "thread": "nvmf_tgt_poll_group_000", 00:18:08.909 "listen_address": { 00:18:08.909 "trtype": "TCP", 00:18:08.909 "adrfam": "IPv4", 00:18:08.909 "traddr": "10.0.0.2", 00:18:08.909 "trsvcid": "4420" 00:18:08.909 }, 00:18:08.909 "peer_address": { 00:18:08.909 "trtype": "TCP", 00:18:08.909 "adrfam": "IPv4", 00:18:08.909 "traddr": "10.0.0.1", 00:18:08.909 "trsvcid": "60374" 00:18:08.909 }, 00:18:08.909 "auth": { 00:18:08.909 "state": "completed", 00:18:08.909 "digest": "sha256", 00:18:08.909 "dhgroup": "ffdhe6144" 00:18:08.909 } 00:18:08.909 } 00:18:08.909 ]' 00:18:08.909 13:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.909 13:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.909 13:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.170 13:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:09.170 13:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.170 13:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.170 13:48:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.170 13:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.170 13:48:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.111 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.683 00:18:10.683 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.683 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.683 13:48:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.683 13:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.683 13:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.683 13:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.683 13:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.683 13:48:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.683 13:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.683 { 00:18:10.683 "cntlid": 35, 00:18:10.683 "qid": 0, 00:18:10.683 "state": "enabled", 00:18:10.683 "thread": "nvmf_tgt_poll_group_000", 00:18:10.683 "listen_address": { 00:18:10.683 "trtype": "TCP", 00:18:10.683 "adrfam": "IPv4", 00:18:10.683 "traddr": "10.0.0.2", 00:18:10.683 "trsvcid": "4420" 00:18:10.683 }, 00:18:10.683 "peer_address": { 00:18:10.683 "trtype": "TCP", 00:18:10.683 "adrfam": "IPv4", 00:18:10.683 "traddr": "10.0.0.1", 00:18:10.683 "trsvcid": "60396" 00:18:10.683 }, 00:18:10.683 "auth": { 00:18:10.683 "state": "completed", 00:18:10.683 "digest": "sha256", 00:18:10.683 "dhgroup": "ffdhe6144" 00:18:10.683 } 00:18:10.683 } 00:18:10.683 ]' 00:18:10.683 13:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.683 13:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.683 13:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.944 13:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.944 13:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.944 13:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.944 13:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.944 13:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.944 13:48:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.885 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.455 00:18:12.455 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.455 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.455 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.455 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.455 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.455 13:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.455 13:48:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:12.455 13:48:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.455 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.455 { 00:18:12.455 "cntlid": 37, 00:18:12.455 "qid": 0, 00:18:12.455 "state": "enabled", 00:18:12.455 "thread": "nvmf_tgt_poll_group_000", 00:18:12.455 "listen_address": { 00:18:12.455 "trtype": "TCP", 00:18:12.455 "adrfam": "IPv4", 00:18:12.455 "traddr": "10.0.0.2", 00:18:12.455 "trsvcid": "4420" 00:18:12.455 }, 00:18:12.455 "peer_address": { 00:18:12.455 "trtype": "TCP", 00:18:12.455 "adrfam": "IPv4", 00:18:12.455 "traddr": "10.0.0.1", 00:18:12.455 "trsvcid": "56982" 00:18:12.455 }, 00:18:12.455 "auth": { 00:18:12.455 "state": "completed", 00:18:12.455 "digest": "sha256", 00:18:12.455 "dhgroup": "ffdhe6144" 00:18:12.455 } 00:18:12.455 } 00:18:12.455 ]' 00:18:12.455 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.455 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.455 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.455 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.455 13:48:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.715 13:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.715 13:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.715 13:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.715 13:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:18:13.655 13:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.655 13:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.655 13:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.655 13:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.655 13:48:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.655 13:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.655 13:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:13.655 13:48:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:13.656 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:18:13.656 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.656 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.656 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:13.656 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.656 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.656 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:13.656 13:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.656 13:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.656 13:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.656 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.656 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.225 00:18:14.225 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.225 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.225 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.225 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.225 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.225 13:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.225 13:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.225 13:48:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.225 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.225 { 00:18:14.225 "cntlid": 39, 00:18:14.225 "qid": 0, 00:18:14.225 "state": "enabled", 00:18:14.225 "thread": "nvmf_tgt_poll_group_000", 00:18:14.225 "listen_address": { 00:18:14.225 "trtype": "TCP", 00:18:14.225 "adrfam": "IPv4", 00:18:14.225 "traddr": "10.0.0.2", 00:18:14.225 "trsvcid": "4420" 00:18:14.225 }, 00:18:14.225 "peer_address": { 00:18:14.225 "trtype": "TCP", 00:18:14.225 "adrfam": "IPv4", 00:18:14.225 "traddr": "10.0.0.1", 00:18:14.225 "trsvcid": "57018" 00:18:14.225 }, 00:18:14.225 "auth": { 00:18:14.225 "state": "completed", 00:18:14.225 "digest": "sha256", 00:18:14.225 "dhgroup": "ffdhe6144" 00:18:14.225 } 00:18:14.225 } 00:18:14.225 ]' 00:18:14.225 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.225 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.225 13:48:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.485 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:14.485 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.485 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.485 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.485 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.485 13:48:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.425 13:48:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.425 13:48:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.031 00:18:16.031 13:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.031 13:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.031 13:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.291 13:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.291 13:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.291 13:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.291 13:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.291 13:48:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.291 13:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.291 { 00:18:16.292 "cntlid": 41, 00:18:16.292 "qid": 0, 00:18:16.292 "state": "enabled", 00:18:16.292 "thread": "nvmf_tgt_poll_group_000", 00:18:16.292 "listen_address": { 00:18:16.292 "trtype": "TCP", 00:18:16.292 "adrfam": "IPv4", 00:18:16.292 "traddr": "10.0.0.2", 00:18:16.292 "trsvcid": "4420" 00:18:16.292 }, 00:18:16.292 "peer_address": { 00:18:16.292 "trtype": "TCP", 00:18:16.292 "adrfam": "IPv4", 00:18:16.292 "traddr": "10.0.0.1", 00:18:16.292 "trsvcid": "57040" 00:18:16.292 }, 00:18:16.292 "auth": { 00:18:16.292 "state": "completed", 00:18:16.292 "digest": "sha256", 00:18:16.292 "dhgroup": "ffdhe8192" 00:18:16.292 } 00:18:16.292 } 00:18:16.292 ]' 00:18:16.292 13:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.292 13:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.292 13:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.292 13:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.292 13:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.292 13:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.292 13:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.292 13:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.552 13:48:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:18:17.122 13:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.122 13:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.122 13:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.122 13:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.383 13:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.383 13:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.383 13:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:17.383 13:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:17.383 13:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:17.383 13:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.383 13:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.383 13:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:17.383 13:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:17.383 13:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.383 13:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.383 13:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.383 13:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.383 13:48:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.383 13:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.383 13:48:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.954 00:18:17.954 13:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.954 13:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.954 13:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.215 13:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.215 13:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.215 13:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.215 13:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.215 13:48:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.215 13:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.215 { 00:18:18.215 "cntlid": 43, 00:18:18.215 "qid": 0, 00:18:18.215 "state": "enabled", 00:18:18.215 "thread": "nvmf_tgt_poll_group_000", 00:18:18.215 "listen_address": { 00:18:18.215 "trtype": "TCP", 00:18:18.215 "adrfam": "IPv4", 00:18:18.215 "traddr": "10.0.0.2", 00:18:18.215 "trsvcid": "4420" 00:18:18.215 }, 00:18:18.215 "peer_address": { 00:18:18.215 "trtype": "TCP", 00:18:18.215 "adrfam": "IPv4", 00:18:18.215 "traddr": "10.0.0.1", 00:18:18.215 "trsvcid": "57064" 00:18:18.215 }, 00:18:18.215 "auth": { 00:18:18.215 "state": "completed", 00:18:18.215 "digest": "sha256", 00:18:18.215 "dhgroup": "ffdhe8192" 00:18:18.215 } 00:18:18.215 } 00:18:18.215 ]' 00:18:18.215 13:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.215 13:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.215 13:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.215 13:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.215 13:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.215 13:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.215 13:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.215 13:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.476 13:48:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:18:19.047 13:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.307 13:48:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.877 00:18:19.877 13:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.877 13:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.877 13:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.138 13:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.138 13:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.138 13:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.138 13:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.138 13:48:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.138 13:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.138 { 00:18:20.138 "cntlid": 45, 00:18:20.138 "qid": 0, 00:18:20.138 "state": "enabled", 00:18:20.138 "thread": "nvmf_tgt_poll_group_000", 00:18:20.138 "listen_address": { 00:18:20.138 "trtype": "TCP", 00:18:20.138 "adrfam": "IPv4", 00:18:20.138 "traddr": "10.0.0.2", 00:18:20.138 "trsvcid": "4420" 
00:18:20.138 }, 00:18:20.138 "peer_address": { 00:18:20.138 "trtype": "TCP", 00:18:20.138 "adrfam": "IPv4", 00:18:20.138 "traddr": "10.0.0.1", 00:18:20.138 "trsvcid": "57094" 00:18:20.138 }, 00:18:20.138 "auth": { 00:18:20.138 "state": "completed", 00:18:20.138 "digest": "sha256", 00:18:20.138 "dhgroup": "ffdhe8192" 00:18:20.138 } 00:18:20.138 } 00:18:20.138 ]' 00:18:20.138 13:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.138 13:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.138 13:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.138 13:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.138 13:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.138 13:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.138 13:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.138 13:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.398 13:48:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:18:20.969 13:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.229 13:48:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.229 13:48:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.800 00:18:21.800 13:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.800 13:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.800 13:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.061 13:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.061 13:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.061 13:48:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.061 13:48:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.061 13:48:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.061 13:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.061 { 00:18:22.061 "cntlid": 47, 00:18:22.061 "qid": 0, 00:18:22.061 "state": "enabled", 00:18:22.061 "thread": "nvmf_tgt_poll_group_000", 00:18:22.061 "listen_address": { 00:18:22.061 "trtype": "TCP", 00:18:22.061 "adrfam": "IPv4", 00:18:22.061 "traddr": "10.0.0.2", 00:18:22.061 "trsvcid": "4420" 00:18:22.061 }, 00:18:22.061 "peer_address": { 00:18:22.061 "trtype": "TCP", 00:18:22.061 "adrfam": "IPv4", 00:18:22.061 "traddr": "10.0.0.1", 00:18:22.061 "trsvcid": "44492" 00:18:22.061 }, 00:18:22.061 "auth": { 00:18:22.061 "state": "completed", 00:18:22.061 "digest": "sha256", 00:18:22.061 "dhgroup": "ffdhe8192" 00:18:22.061 } 00:18:22.061 } 00:18:22.061 ]' 00:18:22.061 13:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.061 13:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.061 13:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.061 13:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.061 13:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.061 13:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.061 13:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.061 
13:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.340 13:48:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:18:22.911 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.911 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.911 13:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.911 13:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.911 13:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.911 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:22.911 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.911 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.911 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:22.911 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:23.172 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:23.172 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.172 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:23.172 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:23.172 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:23.172 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.172 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.172 13:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.172 13:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.172 13:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.172 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.172 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.433 00:18:23.433 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.433 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.433 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.433 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.433 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.433 13:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.433 13:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.433 13:48:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.433 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.433 { 00:18:23.433 "cntlid": 49, 00:18:23.433 "qid": 0, 00:18:23.433 "state": "enabled", 00:18:23.433 "thread": "nvmf_tgt_poll_group_000", 00:18:23.433 "listen_address": { 00:18:23.433 "trtype": "TCP", 00:18:23.433 "adrfam": "IPv4", 00:18:23.433 "traddr": "10.0.0.2", 00:18:23.433 "trsvcid": "4420" 00:18:23.433 }, 00:18:23.433 "peer_address": { 00:18:23.433 "trtype": "TCP", 00:18:23.433 "adrfam": "IPv4", 00:18:23.433 "traddr": "10.0.0.1", 00:18:23.433 "trsvcid": "44522" 00:18:23.433 }, 00:18:23.433 "auth": { 00:18:23.433 "state": "completed", 00:18:23.433 "digest": "sha384", 00:18:23.433 "dhgroup": "null" 00:18:23.433 } 00:18:23.433 } 00:18:23.433 ]' 00:18:23.433 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.693 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.693 13:48:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.693 13:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:23.693 13:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.693 13:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.693 13:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.693 13:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.954 13:48:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:18:24.536 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.536 13:48:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.536 13:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.536 13:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.536 13:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.536 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.536 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:24.536 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:24.797 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:24.797 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.797 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.797 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:24.797 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:24.797 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.797 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.797 13:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.797 13:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.797 13:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.797 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.797 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.057 00:18:25.057 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.057 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.057 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.057 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.317 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.317 13:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.317 13:48:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:25.317 13:48:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.317 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.317 { 00:18:25.317 "cntlid": 51, 00:18:25.317 "qid": 0, 00:18:25.317 "state": "enabled", 00:18:25.317 "thread": "nvmf_tgt_poll_group_000", 00:18:25.317 "listen_address": { 00:18:25.317 "trtype": "TCP", 00:18:25.317 "adrfam": "IPv4", 00:18:25.317 "traddr": "10.0.0.2", 00:18:25.317 "trsvcid": "4420" 00:18:25.317 }, 00:18:25.317 "peer_address": { 00:18:25.317 "trtype": "TCP", 00:18:25.317 "adrfam": "IPv4", 00:18:25.317 "traddr": "10.0.0.1", 00:18:25.317 "trsvcid": "44556" 00:18:25.317 }, 00:18:25.317 "auth": { 00:18:25.317 "state": "completed", 00:18:25.317 "digest": "sha384", 00:18:25.317 "dhgroup": "null" 00:18:25.317 } 00:18:25.317 } 00:18:25.317 ]' 00:18:25.317 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.317 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.317 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.317 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:25.317 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.317 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.317 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.317 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.578 13:48:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:18:26.148 13:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.148 13:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.148 13:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.148 13:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.148 13:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.148 13:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.148 13:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:26.148 13:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:26.408 13:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:26.408 13:48:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.408 13:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:26.408 13:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:26.408 13:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:26.408 13:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.408 13:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.408 13:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.408 13:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.408 13:48:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.408 13:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.408 13:48:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.668 00:18:26.668 13:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.668 13:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.668 13:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.933 13:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.933 13:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.933 13:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.933 13:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.933 13:48:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.933 13:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.933 { 00:18:26.933 "cntlid": 53, 00:18:26.933 "qid": 0, 00:18:26.933 "state": "enabled", 00:18:26.933 "thread": "nvmf_tgt_poll_group_000", 00:18:26.933 "listen_address": { 00:18:26.933 "trtype": "TCP", 00:18:26.933 "adrfam": "IPv4", 00:18:26.933 "traddr": "10.0.0.2", 00:18:26.933 "trsvcid": "4420" 00:18:26.933 }, 00:18:26.933 "peer_address": { 00:18:26.933 "trtype": "TCP", 00:18:26.933 "adrfam": "IPv4", 00:18:26.933 "traddr": "10.0.0.1", 00:18:26.933 "trsvcid": "44588" 00:18:26.933 }, 00:18:26.933 "auth": { 00:18:26.933 "state": "completed", 00:18:26.933 "digest": "sha384", 00:18:26.933 "dhgroup": "null" 00:18:26.933 } 00:18:26.933 } 00:18:26.933 ]' 00:18:26.933 13:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.933 13:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:18:26.933 13:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.933 13:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:26.933 13:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.933 13:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.933 13:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.933 13:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.193 13:48:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:18:27.764 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.764 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.764 13:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.764 13:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.764 13:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.764 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.764 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:27.764 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:28.024 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:28.024 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.024 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.024 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:28.024 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:28.024 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.024 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:28.024 13:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.024 13:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.024 13:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.025 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.025 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.285 00:18:28.285 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.285 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.285 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.285 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.285 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.285 13:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.285 13:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.285 13:48:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.285 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.285 { 00:18:28.285 "cntlid": 55, 00:18:28.285 "qid": 0, 00:18:28.285 "state": "enabled", 00:18:28.285 "thread": "nvmf_tgt_poll_group_000", 00:18:28.285 "listen_address": { 00:18:28.285 "trtype": "TCP", 00:18:28.285 "adrfam": "IPv4", 00:18:28.285 "traddr": "10.0.0.2", 00:18:28.285 "trsvcid": "4420" 00:18:28.285 }, 00:18:28.285 "peer_address": { 00:18:28.285 "trtype": "TCP", 00:18:28.285 "adrfam": "IPv4", 00:18:28.285 "traddr": "10.0.0.1", 00:18:28.285 "trsvcid": "44628" 00:18:28.285 }, 00:18:28.285 "auth": { 00:18:28.285 "state": "completed", 00:18:28.285 "digest": "sha384", 00:18:28.285 "dhgroup": "null" 00:18:28.285 } 00:18:28.285 } 00:18:28.285 ]' 00:18:28.285 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.545 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.545 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.545 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:28.545 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.545 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.545 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.545 13:48:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.805 13:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:18:29.375 13:48:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.375 13:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.375 13:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.375 13:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.375 13:48:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.375 13:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.375 13:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.375 13:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:29.375 13:48:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:29.635 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:29.635 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.635 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:29.635 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:29.635 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:29.635 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.635 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.635 13:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.635 13:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.635 13:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.635 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.635 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.895 00:18:29.895 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.895 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.895 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.895 13:48:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.895 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.895 13:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.895 13:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.175 13:48:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.175 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.175 { 00:18:30.175 "cntlid": 57, 00:18:30.175 "qid": 0, 00:18:30.175 "state": "enabled", 00:18:30.175 "thread": "nvmf_tgt_poll_group_000", 00:18:30.175 "listen_address": { 00:18:30.175 "trtype": "TCP", 00:18:30.175 "adrfam": "IPv4", 00:18:30.175 "traddr": "10.0.0.2", 00:18:30.175 "trsvcid": "4420" 00:18:30.175 }, 00:18:30.175 "peer_address": { 00:18:30.175 "trtype": "TCP", 00:18:30.175 "adrfam": "IPv4", 00:18:30.175 "traddr": "10.0.0.1", 00:18:30.175 "trsvcid": "44652" 00:18:30.175 }, 00:18:30.175 "auth": { 00:18:30.175 "state": "completed", 00:18:30.175 "digest": "sha384", 00:18:30.175 "dhgroup": "ffdhe2048" 00:18:30.175 } 00:18:30.175 } 00:18:30.175 ]' 00:18:30.175 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.175 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.175 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.175 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:30.175 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.175 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.175 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.175 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.454 13:48:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:18:31.039 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.039 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.039 13:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.039 13:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.039 13:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.039 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.039 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:31.039 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:31.301 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:31.301 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.301 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.301 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:31.301 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:31.301 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.301 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.301 13:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.301 13:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.301 13:48:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.301 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.301 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.562 00:18:31.562 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.562 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.562 13:48:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.562 13:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.562 13:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.562 13:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.562 13:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.562 13:48:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.562 13:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.562 { 00:18:31.562 "cntlid": 59, 00:18:31.562 "qid": 0, 00:18:31.562 "state": "enabled", 00:18:31.562 "thread": "nvmf_tgt_poll_group_000", 00:18:31.562 "listen_address": { 00:18:31.562 "trtype": "TCP", 00:18:31.562 "adrfam": "IPv4", 00:18:31.562 "traddr": "10.0.0.2", 00:18:31.562 "trsvcid": "4420" 00:18:31.562 }, 00:18:31.562 "peer_address": { 00:18:31.562 "trtype": "TCP", 00:18:31.562 "adrfam": "IPv4", 00:18:31.562 
"traddr": "10.0.0.1", 00:18:31.562 "trsvcid": "47286" 00:18:31.562 }, 00:18:31.562 "auth": { 00:18:31.562 "state": "completed", 00:18:31.562 "digest": "sha384", 00:18:31.562 "dhgroup": "ffdhe2048" 00:18:31.562 } 00:18:31.562 } 00:18:31.562 ]' 00:18:31.823 13:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.823 13:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.823 13:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.823 13:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:31.823 13:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.823 13:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.823 13:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.823 13:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.083 13:48:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:18:32.654 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.654 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.654 13:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.654 13:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.654 13:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.654 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.654 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:32.654 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:32.914 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:32.914 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.914 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:32.914 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:32.914 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:32.914 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.914 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.914 13:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.914 13:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.914 13:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.914 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.914 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.174 00:18:33.174 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.174 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.174 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.435 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.435 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.435 13:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.435 13:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.435 13:48:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.435 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.435 { 00:18:33.435 "cntlid": 61, 00:18:33.435 "qid": 0, 00:18:33.435 "state": "enabled", 00:18:33.435 "thread": "nvmf_tgt_poll_group_000", 00:18:33.435 "listen_address": { 00:18:33.435 "trtype": "TCP", 00:18:33.435 "adrfam": "IPv4", 00:18:33.435 "traddr": "10.0.0.2", 00:18:33.435 "trsvcid": "4420" 00:18:33.435 }, 00:18:33.435 "peer_address": { 00:18:33.435 "trtype": "TCP", 00:18:33.435 "adrfam": "IPv4", 00:18:33.435 "traddr": "10.0.0.1", 00:18:33.435 "trsvcid": "47322" 00:18:33.435 }, 00:18:33.435 "auth": { 00:18:33.435 "state": "completed", 00:18:33.435 "digest": "sha384", 00:18:33.435 "dhgroup": "ffdhe2048" 00:18:33.435 } 00:18:33.435 } 00:18:33.435 ]' 00:18:33.435 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.435 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.435 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.435 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:33.435 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.435 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.435 13:48:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.435 13:48:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.696 13:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:18:34.266 13:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.266 13:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.266 13:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.266 13:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.266 13:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.266 13:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.266 13:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:34.266 13:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:34.527 13:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:34.527 13:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.527 13:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.527 13:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:34.527 13:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:34.527 13:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.527 13:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:34.527 13:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.527 13:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.527 13:49:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.527 13:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.527 13:49:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.787 00:18:34.787 13:49:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.787 13:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.787 13:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.047 13:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.047 13:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.047 13:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.047 13:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.047 13:49:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.047 13:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.047 { 00:18:35.047 "cntlid": 63, 00:18:35.047 "qid": 0, 00:18:35.047 "state": "enabled", 00:18:35.047 "thread": "nvmf_tgt_poll_group_000", 00:18:35.047 "listen_address": { 00:18:35.047 "trtype": "TCP", 00:18:35.047 "adrfam": "IPv4", 00:18:35.047 "traddr": "10.0.0.2", 00:18:35.047 "trsvcid": "4420" 00:18:35.047 }, 00:18:35.047 "peer_address": { 00:18:35.047 "trtype": "TCP", 00:18:35.047 "adrfam": "IPv4", 00:18:35.047 "traddr": "10.0.0.1", 00:18:35.047 "trsvcid": "47348" 00:18:35.047 }, 00:18:35.047 "auth": { 00:18:35.047 "state": "completed", 00:18:35.047 "digest": "sha384", 00:18:35.047 "dhgroup": "ffdhe2048" 00:18:35.047 } 00:18:35.047 } 00:18:35.047 ]' 00:18:35.047 13:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.047 13:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.047 13:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.047 13:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:35.047 13:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.047 13:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.047 13:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.047 13:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.308 13:49:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:18:35.882 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.882 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.882 13:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
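
The xtrace above repeats the same DH-HMAC-CHAP exercise for every dhgroup (null, ffdhe2048, ffdhe3072, ...) and key index. Condensed into plain commands, one sha384 iteration of that loop looks roughly like the sketch below. Everything in it is lifted from the log (RPC script path, socket paths, NQNs, key names key1/ckey1); it assumes the SPDK target, the host-side RPC server on /var/tmp/host.sock, and the named DH-CHAP keys were already set up earlier in auth.sh, and the DHHC-1 secrets passed to nvme connect are the ones printed in the log (elided here).

# --- condensed sketch of one sha384/ffdhe2048 iteration (commands taken from the log above) ---
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side (bdev_nvme initiator behind /var/tmp/host.sock): restrict negotiation
# to the digest/dhgroup pair under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Target side (default RPC socket): allow the host NQN with the matching key pair.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller, which triggers the DH-HMAC-CHAP handshake.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Checks: the controller exists and the target reports the negotiated auth parameters.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'      # expect nvme0
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'          # expect sha384
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'         # expect ffdhe2048
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'           # expect completed

# Tear down, then repeat the authentication through the kernel initiator using the
# literal DHHC-1 secrets printed in the log (elided here), and clean up for the next key.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each iteration in the log then asserts, via jq over the captured nvmf_subsystem_get_qpairs output, that the negotiated digest and dhgroup match what bdev_nvme_set_options configured and that auth.state reports "completed" before detaching the controller and moving on to the next key/dhgroup combination.
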
00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.143 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.403 00:18:36.403 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.403 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.403 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.664 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.664 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.664 13:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.664 13:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.664 13:49:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.664 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.664 { 
00:18:36.664 "cntlid": 65, 00:18:36.664 "qid": 0, 00:18:36.664 "state": "enabled", 00:18:36.664 "thread": "nvmf_tgt_poll_group_000", 00:18:36.664 "listen_address": { 00:18:36.664 "trtype": "TCP", 00:18:36.664 "adrfam": "IPv4", 00:18:36.664 "traddr": "10.0.0.2", 00:18:36.664 "trsvcid": "4420" 00:18:36.664 }, 00:18:36.664 "peer_address": { 00:18:36.664 "trtype": "TCP", 00:18:36.664 "adrfam": "IPv4", 00:18:36.664 "traddr": "10.0.0.1", 00:18:36.664 "trsvcid": "47374" 00:18:36.664 }, 00:18:36.664 "auth": { 00:18:36.664 "state": "completed", 00:18:36.664 "digest": "sha384", 00:18:36.664 "dhgroup": "ffdhe3072" 00:18:36.664 } 00:18:36.664 } 00:18:36.664 ]' 00:18:36.664 13:49:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.664 13:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.664 13:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.664 13:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:36.664 13:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.664 13:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.664 13:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.664 13:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.924 13:49:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:18:37.496 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.757 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.018 00:18:38.018 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.018 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.018 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.278 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.278 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.278 13:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.278 13:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.278 13:49:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.278 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.278 { 00:18:38.278 "cntlid": 67, 00:18:38.278 "qid": 0, 00:18:38.278 "state": "enabled", 00:18:38.278 "thread": "nvmf_tgt_poll_group_000", 00:18:38.278 "listen_address": { 00:18:38.278 "trtype": "TCP", 00:18:38.278 "adrfam": "IPv4", 00:18:38.278 "traddr": "10.0.0.2", 00:18:38.278 "trsvcid": "4420" 00:18:38.278 }, 00:18:38.278 "peer_address": { 00:18:38.278 "trtype": "TCP", 00:18:38.278 "adrfam": "IPv4", 00:18:38.278 "traddr": "10.0.0.1", 00:18:38.278 "trsvcid": "47402" 00:18:38.278 }, 00:18:38.278 "auth": { 00:18:38.278 "state": "completed", 00:18:38.278 "digest": "sha384", 00:18:38.278 "dhgroup": "ffdhe3072" 00:18:38.278 } 00:18:38.278 } 00:18:38.278 ]' 00:18:38.278 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.278 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.278 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.278 13:49:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:38.278 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.278 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.278 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.278 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.540 13:49:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:18:39.482 13:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.482 13:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.482 13:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.482 13:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.482 13:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.482 13:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.482 13:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:39.482 13:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:39.482 13:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:39.482 13:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.482 13:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:39.482 13:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:39.483 13:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:39.483 13:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.483 13:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.483 13:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.483 13:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.483 13:49:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.483 13:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.483 13:49:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.744 00:18:39.744 13:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.744 13:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.744 13:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.744 13:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.744 13:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.744 13:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.744 13:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.004 13:49:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.004 13:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.004 { 00:18:40.004 "cntlid": 69, 00:18:40.004 "qid": 0, 00:18:40.004 "state": "enabled", 00:18:40.004 "thread": "nvmf_tgt_poll_group_000", 00:18:40.004 "listen_address": { 00:18:40.004 "trtype": "TCP", 00:18:40.004 "adrfam": "IPv4", 00:18:40.004 "traddr": "10.0.0.2", 00:18:40.004 "trsvcid": "4420" 00:18:40.004 }, 00:18:40.004 "peer_address": { 00:18:40.004 "trtype": "TCP", 00:18:40.004 "adrfam": "IPv4", 00:18:40.004 "traddr": "10.0.0.1", 00:18:40.004 "trsvcid": "47430" 00:18:40.004 }, 00:18:40.004 "auth": { 00:18:40.004 "state": "completed", 00:18:40.004 "digest": "sha384", 00:18:40.004 "dhgroup": "ffdhe3072" 00:18:40.004 } 00:18:40.004 } 00:18:40.004 ]' 00:18:40.004 13:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.004 13:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.004 13:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.004 13:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:40.004 13:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.004 13:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.004 13:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.004 13:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.265 13:49:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret 
DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:18:40.834 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.834 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.834 13:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.834 13:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.834 13:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.834 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.835 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:40.835 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:41.127 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:41.127 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.127 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:41.127 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:41.127 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:41.127 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.127 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:41.127 13:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.127 13:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.127 13:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.127 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.127 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.388 00:18:41.388 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.388 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.388 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.648 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.648 13:49:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.648 13:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.648 13:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.648 13:49:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.648 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.648 { 00:18:41.648 "cntlid": 71, 00:18:41.648 "qid": 0, 00:18:41.648 "state": "enabled", 00:18:41.648 "thread": "nvmf_tgt_poll_group_000", 00:18:41.648 "listen_address": { 00:18:41.648 "trtype": "TCP", 00:18:41.648 "adrfam": "IPv4", 00:18:41.648 "traddr": "10.0.0.2", 00:18:41.648 "trsvcid": "4420" 00:18:41.648 }, 00:18:41.648 "peer_address": { 00:18:41.648 "trtype": "TCP", 00:18:41.648 "adrfam": "IPv4", 00:18:41.648 "traddr": "10.0.0.1", 00:18:41.648 "trsvcid": "38144" 00:18:41.648 }, 00:18:41.648 "auth": { 00:18:41.648 "state": "completed", 00:18:41.648 "digest": "sha384", 00:18:41.648 "dhgroup": "ffdhe3072" 00:18:41.648 } 00:18:41.648 } 00:18:41.648 ]' 00:18:41.648 13:49:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.648 13:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.648 13:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.648 13:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:41.648 13:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.648 13:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.648 13:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.648 13:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.908 13:49:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.850 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.110 00:18:43.110 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.110 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.110 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.371 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.371 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.371 13:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.371 13:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.371 13:49:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.371 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.371 { 00:18:43.371 "cntlid": 73, 00:18:43.371 "qid": 0, 00:18:43.371 "state": "enabled", 00:18:43.371 "thread": "nvmf_tgt_poll_group_000", 00:18:43.371 "listen_address": { 00:18:43.371 "trtype": "TCP", 00:18:43.371 "adrfam": "IPv4", 00:18:43.371 "traddr": "10.0.0.2", 00:18:43.371 "trsvcid": "4420" 00:18:43.371 }, 00:18:43.371 "peer_address": { 00:18:43.371 "trtype": "TCP", 00:18:43.371 "adrfam": "IPv4", 00:18:43.371 "traddr": "10.0.0.1", 00:18:43.371 "trsvcid": "38162" 00:18:43.371 }, 00:18:43.371 "auth": { 00:18:43.371 
"state": "completed", 00:18:43.371 "digest": "sha384", 00:18:43.371 "dhgroup": "ffdhe4096" 00:18:43.371 } 00:18:43.371 } 00:18:43.371 ]' 00:18:43.371 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.371 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.371 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.371 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:43.371 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.371 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.371 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.371 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.631 13:49:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:18:44.201 13:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.461 13:49:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.737 00:18:44.737 13:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.737 13:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.737 13:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.035 13:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.035 13:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.035 13:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.035 13:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.035 13:49:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.035 13:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.035 { 00:18:45.035 "cntlid": 75, 00:18:45.035 "qid": 0, 00:18:45.035 "state": "enabled", 00:18:45.035 "thread": "nvmf_tgt_poll_group_000", 00:18:45.035 "listen_address": { 00:18:45.035 "trtype": "TCP", 00:18:45.035 "adrfam": "IPv4", 00:18:45.035 "traddr": "10.0.0.2", 00:18:45.035 "trsvcid": "4420" 00:18:45.035 }, 00:18:45.035 "peer_address": { 00:18:45.035 "trtype": "TCP", 00:18:45.035 "adrfam": "IPv4", 00:18:45.035 "traddr": "10.0.0.1", 00:18:45.035 "trsvcid": "38194" 00:18:45.035 }, 00:18:45.035 "auth": { 00:18:45.035 "state": "completed", 00:18:45.035 "digest": "sha384", 00:18:45.035 "dhgroup": "ffdhe4096" 00:18:45.035 } 00:18:45.035 } 00:18:45.035 ]' 00:18:45.035 13:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.035 13:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.035 13:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.035 13:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:45.035 13:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.035 13:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.035 13:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.035 13:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.304 13:49:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:18:45.874 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.874 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.874 13:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.874 13:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.874 13:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.874 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.874 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.874 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:46.135 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:46.135 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.135 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.135 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:46.135 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:46.135 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.135 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.135 13:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.135 13:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.135 13:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.135 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.135 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:46.395 00:18:46.395 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.395 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.395 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.656 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.656 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.656 13:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.656 13:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.656 13:49:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.656 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.656 { 00:18:46.656 "cntlid": 77, 00:18:46.656 "qid": 0, 00:18:46.656 "state": "enabled", 00:18:46.656 "thread": "nvmf_tgt_poll_group_000", 00:18:46.656 "listen_address": { 00:18:46.656 "trtype": "TCP", 00:18:46.656 "adrfam": "IPv4", 00:18:46.656 "traddr": "10.0.0.2", 00:18:46.656 "trsvcid": "4420" 00:18:46.656 }, 00:18:46.656 "peer_address": { 00:18:46.656 "trtype": "TCP", 00:18:46.656 "adrfam": "IPv4", 00:18:46.656 "traddr": "10.0.0.1", 00:18:46.656 "trsvcid": "38214" 00:18:46.656 }, 00:18:46.656 "auth": { 00:18:46.656 "state": "completed", 00:18:46.656 "digest": "sha384", 00:18:46.656 "dhgroup": "ffdhe4096" 00:18:46.656 } 00:18:46.656 } 00:18:46.656 ]' 00:18:46.656 13:49:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.656 13:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.656 13:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.656 13:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:46.656 13:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.656 13:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.656 13:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.656 13:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.916 13:49:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.857 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.117 00:18:48.117 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.117 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.117 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.377 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.377 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.377 13:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.377 13:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.377 13:49:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.377 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.377 { 00:18:48.377 "cntlid": 79, 00:18:48.377 "qid": 
0, 00:18:48.377 "state": "enabled", 00:18:48.377 "thread": "nvmf_tgt_poll_group_000", 00:18:48.377 "listen_address": { 00:18:48.377 "trtype": "TCP", 00:18:48.377 "adrfam": "IPv4", 00:18:48.377 "traddr": "10.0.0.2", 00:18:48.377 "trsvcid": "4420" 00:18:48.377 }, 00:18:48.377 "peer_address": { 00:18:48.377 "trtype": "TCP", 00:18:48.377 "adrfam": "IPv4", 00:18:48.377 "traddr": "10.0.0.1", 00:18:48.377 "trsvcid": "38250" 00:18:48.377 }, 00:18:48.377 "auth": { 00:18:48.377 "state": "completed", 00:18:48.377 "digest": "sha384", 00:18:48.377 "dhgroup": "ffdhe4096" 00:18:48.377 } 00:18:48.377 } 00:18:48.377 ]' 00:18:48.377 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.377 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.377 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.377 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:48.377 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.377 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.377 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.377 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.637 13:49:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:18:49.210 13:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.210 13:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.210 13:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.210 13:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.471 13:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.471 13:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.471 13:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.471 13:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:49.471 13:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:49.471 13:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:49.471 13:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.471 13:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.471 13:49:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:49.471 13:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:49.471 13:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.471 13:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.471 13:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.471 13:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.471 13:49:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.471 13:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.471 13:49:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.755 00:18:50.016 13:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.016 13:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.016 13:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.016 13:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.016 13:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.016 13:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.016 13:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.016 13:49:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.016 13:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.016 { 00:18:50.016 "cntlid": 81, 00:18:50.016 "qid": 0, 00:18:50.016 "state": "enabled", 00:18:50.016 "thread": "nvmf_tgt_poll_group_000", 00:18:50.016 "listen_address": { 00:18:50.016 "trtype": "TCP", 00:18:50.016 "adrfam": "IPv4", 00:18:50.016 "traddr": "10.0.0.2", 00:18:50.016 "trsvcid": "4420" 00:18:50.016 }, 00:18:50.016 "peer_address": { 00:18:50.016 "trtype": "TCP", 00:18:50.016 "adrfam": "IPv4", 00:18:50.016 "traddr": "10.0.0.1", 00:18:50.016 "trsvcid": "38272" 00:18:50.016 }, 00:18:50.016 "auth": { 00:18:50.016 "state": "completed", 00:18:50.016 "digest": "sha384", 00:18:50.016 "dhgroup": "ffdhe6144" 00:18:50.016 } 00:18:50.016 } 00:18:50.016 ]' 00:18:50.016 13:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.016 13:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.016 13:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.277 13:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:50.277 13:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.277 13:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.277 13:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.277 13:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.277 13:49:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.219 13:49:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.480 00:18:51.480 13:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.480 13:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.480 13:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.742 13:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.742 13:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.742 13:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.742 13:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.742 13:49:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.742 13:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.742 { 00:18:51.742 "cntlid": 83, 00:18:51.742 "qid": 0, 00:18:51.742 "state": "enabled", 00:18:51.742 "thread": "nvmf_tgt_poll_group_000", 00:18:51.742 "listen_address": { 00:18:51.742 "trtype": "TCP", 00:18:51.742 "adrfam": "IPv4", 00:18:51.742 "traddr": "10.0.0.2", 00:18:51.742 "trsvcid": "4420" 00:18:51.742 }, 00:18:51.742 "peer_address": { 00:18:51.742 "trtype": "TCP", 00:18:51.742 "adrfam": "IPv4", 00:18:51.742 "traddr": "10.0.0.1", 00:18:51.742 "trsvcid": "37878" 00:18:51.742 }, 00:18:51.742 "auth": { 00:18:51.742 "state": "completed", 00:18:51.742 "digest": "sha384", 00:18:51.742 "dhgroup": "ffdhe6144" 00:18:51.742 } 00:18:51.742 } 00:18:51.742 ]' 00:18:51.742 13:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.742 13:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.742 13:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.003 13:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:52.003 13:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.003 13:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.003 13:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.003 13:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.003 13:49:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret 
DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.944 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.204 00:18:53.204 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.204 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.204 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.464 13:49:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.464 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.464 13:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.464 13:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.464 13:49:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.464 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.464 { 00:18:53.464 "cntlid": 85, 00:18:53.464 "qid": 0, 00:18:53.464 "state": "enabled", 00:18:53.464 "thread": "nvmf_tgt_poll_group_000", 00:18:53.464 "listen_address": { 00:18:53.464 "trtype": "TCP", 00:18:53.464 "adrfam": "IPv4", 00:18:53.464 "traddr": "10.0.0.2", 00:18:53.464 "trsvcid": "4420" 00:18:53.464 }, 00:18:53.464 "peer_address": { 00:18:53.464 "trtype": "TCP", 00:18:53.464 "adrfam": "IPv4", 00:18:53.464 "traddr": "10.0.0.1", 00:18:53.464 "trsvcid": "37900" 00:18:53.464 }, 00:18:53.464 "auth": { 00:18:53.464 "state": "completed", 00:18:53.464 "digest": "sha384", 00:18:53.464 "dhgroup": "ffdhe6144" 00:18:53.464 } 00:18:53.464 } 00:18:53.464 ]' 00:18:53.464 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.464 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.464 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.464 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.464 13:49:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.725 13:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.725 13:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.725 13:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.725 13:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:18:54.666 13:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.666 13:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.666 13:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.666 13:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.666 13:49:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.666 13:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.666 13:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
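The entries above are one pass of the auth test's inner loop: restrict the host initiator to a single digest/dhgroup pair, authorize the host NQN on the subsystem with a key pair, attach a controller so DH-HMAC-CHAP runs during CONNECT, and confirm from the qpair listing that the negotiated parameters match. A minimal sketch of that sequence, assuming an SPDK target at 10.0.0.2:4420 reachable on the default RPC socket, a host-side SPDK app serving RPCs on /var/tmp/host.sock, and DHCHAP keys named key1/ckey1 registered earlier in the script (key names and NQNs mirror this log):

# hypothetical standalone version of one connect_authenticate pass (sha384 + ffdhe6144)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0

# host side: allow only one digest/dhgroup combination for this pass
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# target side: authorize the host with a bidirectional key pair
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# host side: attach a controller; DH-HMAC-CHAP is negotiated during CONNECT
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# target side: check what was actually negotiated on the admin qpair
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# tear down before the next digest/dhgroup/key combination
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"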
00:18:54.666 13:49:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.666 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:54.666 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.666 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.666 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:54.666 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:54.666 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.666 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:54.666 13:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.666 13:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.666 13:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.666 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.666 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.927 00:18:55.188 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.188 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.188 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.188 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.188 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.188 13:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.188 13:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.188 13:49:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.188 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.188 { 00:18:55.188 "cntlid": 87, 00:18:55.188 "qid": 0, 00:18:55.188 "state": "enabled", 00:18:55.188 "thread": "nvmf_tgt_poll_group_000", 00:18:55.188 "listen_address": { 00:18:55.188 "trtype": "TCP", 00:18:55.188 "adrfam": "IPv4", 00:18:55.188 "traddr": "10.0.0.2", 00:18:55.188 "trsvcid": "4420" 00:18:55.188 }, 00:18:55.188 "peer_address": { 00:18:55.188 "trtype": "TCP", 00:18:55.188 "adrfam": "IPv4", 00:18:55.188 "traddr": "10.0.0.1", 00:18:55.188 "trsvcid": "37932" 00:18:55.188 }, 00:18:55.188 "auth": { 00:18:55.188 "state": "completed", 
00:18:55.188 "digest": "sha384", 00:18:55.188 "dhgroup": "ffdhe6144" 00:18:55.188 } 00:18:55.188 } 00:18:55.188 ]' 00:18:55.188 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.188 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.188 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.448 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:55.448 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.448 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.448 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.448 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.448 13:49:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.389 13:49:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.958 00:18:56.958 13:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.958 13:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.958 13:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.219 13:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.219 13:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.219 13:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.219 13:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.219 13:49:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.219 13:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.219 { 00:18:57.219 "cntlid": 89, 00:18:57.219 "qid": 0, 00:18:57.219 "state": "enabled", 00:18:57.219 "thread": "nvmf_tgt_poll_group_000", 00:18:57.219 "listen_address": { 00:18:57.219 "trtype": "TCP", 00:18:57.219 "adrfam": "IPv4", 00:18:57.219 "traddr": "10.0.0.2", 00:18:57.219 "trsvcid": "4420" 00:18:57.219 }, 00:18:57.219 "peer_address": { 00:18:57.219 "trtype": "TCP", 00:18:57.219 "adrfam": "IPv4", 00:18:57.219 "traddr": "10.0.0.1", 00:18:57.219 "trsvcid": "37958" 00:18:57.219 }, 00:18:57.219 "auth": { 00:18:57.219 "state": "completed", 00:18:57.219 "digest": "sha384", 00:18:57.219 "dhgroup": "ffdhe8192" 00:18:57.219 } 00:18:57.219 } 00:18:57.219 ]' 00:18:57.219 13:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.219 13:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.219 13:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.219 13:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:57.219 13:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.219 13:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.219 13:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.219 13:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.480 13:49:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.429 13:49:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
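After each RPC-level pass the script also exercises the kernel initiator: nvme connect carries the same host and controller secrets in DHHC-1 form, and a disconnect plus nvmf_subsystem_remove_host resets state for the next combination. A minimal sketch of that step, assuming an nvme-cli build with DH-HMAC-CHAP support and the same target as above; the DHHC-1 strings are placeholders for the base64 secrets generated for this run:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0

# connect through the kernel host stack, authenticating with the host secret and
# verifying the controller with the controller secret (placeholders, not real keys)
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret "DHHC-1:00:<host secret>" \
    --dhchap-ctrl-secret "DHHC-1:03:<controller secret>"

# tear the connection down so the target-side host entry can be removed
nvme disconnect -n "$subnqn"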
00:18:59.000 00:18:59.000 13:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.000 13:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.000 13:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.000 13:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.000 13:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.000 13:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.000 13:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.000 13:49:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.000 13:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.000 { 00:18:59.000 "cntlid": 91, 00:18:59.000 "qid": 0, 00:18:59.000 "state": "enabled", 00:18:59.000 "thread": "nvmf_tgt_poll_group_000", 00:18:59.000 "listen_address": { 00:18:59.000 "trtype": "TCP", 00:18:59.000 "adrfam": "IPv4", 00:18:59.000 "traddr": "10.0.0.2", 00:18:59.000 "trsvcid": "4420" 00:18:59.000 }, 00:18:59.000 "peer_address": { 00:18:59.000 "trtype": "TCP", 00:18:59.000 "adrfam": "IPv4", 00:18:59.000 "traddr": "10.0.0.1", 00:18:59.000 "trsvcid": "37982" 00:18:59.000 }, 00:18:59.000 "auth": { 00:18:59.000 "state": "completed", 00:18:59.000 "digest": "sha384", 00:18:59.000 "dhgroup": "ffdhe8192" 00:18:59.000 } 00:18:59.000 } 00:18:59.000 ]' 00:18:59.000 13:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.260 13:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.260 13:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.260 13:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:59.260 13:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.260 13:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.260 13:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.260 13:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.260 13:49:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.206 13:49:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.776 00:19:00.776 13:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.776 13:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.776 13:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.038 13:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.038 13:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.038 13:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.038 13:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.038 13:49:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.038 13:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.038 { 
00:19:01.038 "cntlid": 93, 00:19:01.038 "qid": 0, 00:19:01.038 "state": "enabled", 00:19:01.038 "thread": "nvmf_tgt_poll_group_000", 00:19:01.038 "listen_address": { 00:19:01.038 "trtype": "TCP", 00:19:01.038 "adrfam": "IPv4", 00:19:01.038 "traddr": "10.0.0.2", 00:19:01.038 "trsvcid": "4420" 00:19:01.038 }, 00:19:01.038 "peer_address": { 00:19:01.038 "trtype": "TCP", 00:19:01.038 "adrfam": "IPv4", 00:19:01.038 "traddr": "10.0.0.1", 00:19:01.038 "trsvcid": "37998" 00:19:01.038 }, 00:19:01.038 "auth": { 00:19:01.038 "state": "completed", 00:19:01.038 "digest": "sha384", 00:19:01.038 "dhgroup": "ffdhe8192" 00:19:01.038 } 00:19:01.038 } 00:19:01.038 ]' 00:19:01.038 13:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.038 13:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.038 13:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.038 13:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:01.038 13:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.299 13:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.299 13:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.299 13:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.299 13:49:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:02.266 13:49:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.266 13:49:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.837 00:19:02.837 13:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.837 13:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.837 13:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.837 13:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.837 13:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.837 13:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.837 13:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.098 13:49:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.098 13:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.098 { 00:19:03.098 "cntlid": 95, 00:19:03.098 "qid": 0, 00:19:03.098 "state": "enabled", 00:19:03.098 "thread": "nvmf_tgt_poll_group_000", 00:19:03.098 "listen_address": { 00:19:03.098 "trtype": "TCP", 00:19:03.098 "adrfam": "IPv4", 00:19:03.098 "traddr": "10.0.0.2", 00:19:03.098 "trsvcid": "4420" 00:19:03.098 }, 00:19:03.098 "peer_address": { 00:19:03.098 "trtype": "TCP", 00:19:03.098 "adrfam": "IPv4", 00:19:03.098 "traddr": "10.0.0.1", 00:19:03.098 "trsvcid": "48348" 00:19:03.098 }, 00:19:03.098 "auth": { 00:19:03.098 "state": "completed", 00:19:03.098 "digest": "sha384", 00:19:03.098 "dhgroup": "ffdhe8192" 00:19:03.098 } 00:19:03.098 } 00:19:03.098 ]' 00:19:03.098 13:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.098 13:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.098 13:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.098 13:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:03.098 13:49:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.098 13:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.098 13:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.098 13:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.400 13:49:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:19:03.972 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.972 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.972 13:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.972 13:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.972 13:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.972 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:03.972 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.972 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.972 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:03.972 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:04.232 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:04.232 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.232 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.232 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:04.232 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:04.232 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.232 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.232 13:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.232 13:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.232 13:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.232 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.232 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.493 00:19:04.493 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.493 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.493 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.493 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.493 13:49:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.493 13:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.493 13:49:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.493 13:49:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.493 13:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.493 { 00:19:04.493 "cntlid": 97, 00:19:04.493 "qid": 0, 00:19:04.493 "state": "enabled", 00:19:04.493 "thread": "nvmf_tgt_poll_group_000", 00:19:04.493 "listen_address": { 00:19:04.493 "trtype": "TCP", 00:19:04.493 "adrfam": "IPv4", 00:19:04.493 "traddr": "10.0.0.2", 00:19:04.493 "trsvcid": "4420" 00:19:04.493 }, 00:19:04.493 "peer_address": { 00:19:04.493 "trtype": "TCP", 00:19:04.493 "adrfam": "IPv4", 00:19:04.493 "traddr": "10.0.0.1", 00:19:04.493 "trsvcid": "48386" 00:19:04.493 }, 00:19:04.493 "auth": { 00:19:04.493 "state": "completed", 00:19:04.493 "digest": "sha512", 00:19:04.493 "dhgroup": "null" 00:19:04.493 } 00:19:04.493 } 00:19:04.493 ]' 00:19:04.493 13:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.753 13:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.753 13:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.753 13:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:04.753 13:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.753 13:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.753 13:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.753 13:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.014 13:49:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret 
DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:19:05.586 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.586 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.586 13:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.586 13:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.586 13:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.586 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.586 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:05.586 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:05.847 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:05.847 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.847 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:05.847 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:05.847 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:05.847 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.847 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.847 13:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.847 13:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.847 13:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.847 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.847 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.107 00:19:06.107 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.107 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.107 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.107 13:49:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.107 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.107 13:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.107 13:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.367 13:49:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.367 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.367 { 00:19:06.367 "cntlid": 99, 00:19:06.367 "qid": 0, 00:19:06.367 "state": "enabled", 00:19:06.367 "thread": "nvmf_tgt_poll_group_000", 00:19:06.367 "listen_address": { 00:19:06.367 "trtype": "TCP", 00:19:06.367 "adrfam": "IPv4", 00:19:06.367 "traddr": "10.0.0.2", 00:19:06.367 "trsvcid": "4420" 00:19:06.368 }, 00:19:06.368 "peer_address": { 00:19:06.368 "trtype": "TCP", 00:19:06.368 "adrfam": "IPv4", 00:19:06.368 "traddr": "10.0.0.1", 00:19:06.368 "trsvcid": "48422" 00:19:06.368 }, 00:19:06.368 "auth": { 00:19:06.368 "state": "completed", 00:19:06.368 "digest": "sha512", 00:19:06.368 "dhgroup": "null" 00:19:06.368 } 00:19:06.368 } 00:19:06.368 ]' 00:19:06.368 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.368 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.368 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.368 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:06.368 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.368 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.368 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.368 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.628 13:49:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:19:07.213 13:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.213 13:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.213 13:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.213 13:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.213 13:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.213 13:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.213 13:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:07.213 13:49:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:07.482 13:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:07.482 13:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.482 13:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.482 13:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:07.482 13:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:07.482 13:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.482 13:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.482 13:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.482 13:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.482 13:49:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.482 13:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.482 13:49:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.747 00:19:07.747 13:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.747 13:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.747 13:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.747 13:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.747 13:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.747 13:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.747 13:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.747 13:49:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.747 13:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.747 { 00:19:07.747 "cntlid": 101, 00:19:07.747 "qid": 0, 00:19:07.747 "state": "enabled", 00:19:07.747 "thread": "nvmf_tgt_poll_group_000", 00:19:07.747 "listen_address": { 00:19:07.747 "trtype": "TCP", 00:19:07.747 "adrfam": "IPv4", 00:19:07.747 "traddr": "10.0.0.2", 00:19:07.747 "trsvcid": "4420" 00:19:07.747 }, 00:19:07.747 "peer_address": { 00:19:07.747 "trtype": "TCP", 00:19:07.747 "adrfam": "IPv4", 00:19:07.747 "traddr": "10.0.0.1", 00:19:07.747 "trsvcid": "48460" 00:19:07.747 }, 00:19:07.747 "auth": 
{ 00:19:07.747 "state": "completed", 00:19:07.747 "digest": "sha512", 00:19:07.747 "dhgroup": "null" 00:19:07.747 } 00:19:07.747 } 00:19:07.747 ]' 00:19:08.008 13:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.008 13:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.008 13:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.008 13:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:08.008 13:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.008 13:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.008 13:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.008 13:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.268 13:49:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:19:08.838 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.838 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.838 13:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.838 13:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.838 13:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.838 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.838 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:08.838 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:09.099 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:09.099 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.099 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.099 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:09.099 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:09.099 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.099 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:09.099 13:49:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.099 13:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.099 13:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.099 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.099 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.359 00:19:09.359 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.359 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.359 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.359 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.359 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.359 13:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.359 13:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.359 13:49:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.359 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.359 { 00:19:09.359 "cntlid": 103, 00:19:09.359 "qid": 0, 00:19:09.359 "state": "enabled", 00:19:09.359 "thread": "nvmf_tgt_poll_group_000", 00:19:09.359 "listen_address": { 00:19:09.359 "trtype": "TCP", 00:19:09.359 "adrfam": "IPv4", 00:19:09.359 "traddr": "10.0.0.2", 00:19:09.359 "trsvcid": "4420" 00:19:09.359 }, 00:19:09.359 "peer_address": { 00:19:09.359 "trtype": "TCP", 00:19:09.359 "adrfam": "IPv4", 00:19:09.359 "traddr": "10.0.0.1", 00:19:09.359 "trsvcid": "48482" 00:19:09.359 }, 00:19:09.359 "auth": { 00:19:09.359 "state": "completed", 00:19:09.359 "digest": "sha512", 00:19:09.359 "dhgroup": "null" 00:19:09.359 } 00:19:09.359 } 00:19:09.359 ]' 00:19:09.359 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.619 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.619 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.619 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:09.619 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.619 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.619 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.619 13:49:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.620 13:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.559 13:49:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.819 00:19:10.819 13:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.819 13:49:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.819 13:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.819 13:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.819 13:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.819 13:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.819 13:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.078 13:49:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.078 13:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.078 { 00:19:11.078 "cntlid": 105, 00:19:11.078 "qid": 0, 00:19:11.078 "state": "enabled", 00:19:11.078 "thread": "nvmf_tgt_poll_group_000", 00:19:11.078 "listen_address": { 00:19:11.078 "trtype": "TCP", 00:19:11.078 "adrfam": "IPv4", 00:19:11.078 "traddr": "10.0.0.2", 00:19:11.078 "trsvcid": "4420" 00:19:11.078 }, 00:19:11.078 "peer_address": { 00:19:11.078 "trtype": "TCP", 00:19:11.078 "adrfam": "IPv4", 00:19:11.078 "traddr": "10.0.0.1", 00:19:11.078 "trsvcid": "48524" 00:19:11.078 }, 00:19:11.078 "auth": { 00:19:11.078 "state": "completed", 00:19:11.078 "digest": "sha512", 00:19:11.078 "dhgroup": "ffdhe2048" 00:19:11.078 } 00:19:11.078 } 00:19:11.078 ]' 00:19:11.078 13:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.078 13:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.078 13:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.078 13:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.078 13:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.078 13:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.078 13:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.078 13:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.338 13:49:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:19:11.906 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.906 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:11.906 13:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.906 13:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
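[editor's sketch] After each RPC-level check the script also exercises the kernel initiator: it connects with nvme-cli using the raw DHHC-1 secrets, disconnects, and removes the host from the subsystem before moving on. A hedged sketch of that step follows; the secret values are elided here (the generated values appear verbatim in the trace above), and the flag set simply mirrors the logged nvme connect invocation.

#!/usr/bin/env bash
# Sketch of the kernel-initiator check that follows each RPC-level iteration in the trace.
# The DHHC-1 secrets are placeholders; the real generated values are shown in the log above.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0

# Connect with nvme-cli, supplying both the host secret and the controller secret so the
# bidirectional DH-HMAC-CHAP exchange is exercised end to end through the kernel path.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret "DHHC-1:00:<host secret>" \
    --dhchap-ctrl-secret "DHHC-1:03:<controller secret>"

# Drop the connection and revoke the host entry before the next key/dhgroup combination.
nvme disconnect -n "$subnqn"
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_host "$subnqn" "$hostnqn"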
00:19:12.166 13:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.166 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.166 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.166 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.166 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:12.166 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.166 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.166 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:12.166 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:12.166 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.166 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.166 13:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.166 13:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.166 13:49:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.166 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.166 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.426 00:19:12.426 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.426 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.426 13:49:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.713 13:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.713 13:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.713 13:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.713 13:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.713 13:49:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.713 13:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.713 { 00:19:12.713 "cntlid": 107, 00:19:12.713 "qid": 0, 00:19:12.713 "state": "enabled", 00:19:12.713 "thread": 
"nvmf_tgt_poll_group_000", 00:19:12.713 "listen_address": { 00:19:12.713 "trtype": "TCP", 00:19:12.713 "adrfam": "IPv4", 00:19:12.713 "traddr": "10.0.0.2", 00:19:12.713 "trsvcid": "4420" 00:19:12.713 }, 00:19:12.713 "peer_address": { 00:19:12.713 "trtype": "TCP", 00:19:12.713 "adrfam": "IPv4", 00:19:12.713 "traddr": "10.0.0.1", 00:19:12.713 "trsvcid": "34034" 00:19:12.713 }, 00:19:12.713 "auth": { 00:19:12.713 "state": "completed", 00:19:12.713 "digest": "sha512", 00:19:12.713 "dhgroup": "ffdhe2048" 00:19:12.713 } 00:19:12.713 } 00:19:12.713 ]' 00:19:12.713 13:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.713 13:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.713 13:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.713 13:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:12.713 13:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.713 13:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.713 13:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.713 13:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.972 13:49:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:13.912 13:49:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.912 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.172 00:19:14.172 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.172 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.172 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.172 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.172 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.172 13:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.172 13:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.172 13:49:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.172 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.172 { 00:19:14.172 "cntlid": 109, 00:19:14.172 "qid": 0, 00:19:14.172 "state": "enabled", 00:19:14.172 "thread": "nvmf_tgt_poll_group_000", 00:19:14.172 "listen_address": { 00:19:14.172 "trtype": "TCP", 00:19:14.172 "adrfam": "IPv4", 00:19:14.172 "traddr": "10.0.0.2", 00:19:14.172 "trsvcid": "4420" 00:19:14.172 }, 00:19:14.172 "peer_address": { 00:19:14.172 "trtype": "TCP", 00:19:14.172 "adrfam": "IPv4", 00:19:14.172 "traddr": "10.0.0.1", 00:19:14.172 "trsvcid": "34076" 00:19:14.172 }, 00:19:14.172 "auth": { 00:19:14.172 "state": "completed", 00:19:14.172 "digest": "sha512", 00:19:14.172 "dhgroup": "ffdhe2048" 00:19:14.172 } 00:19:14.172 } 00:19:14.172 ]' 00:19:14.172 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.432 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.432 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.432 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:14.432 13:49:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.432 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.432 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.432 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.692 13:49:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.288 13:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.548 13:49:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.548 13:49:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.548 13:49:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.548 00:19:15.548 13:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.548 13:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.548 13:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.808 13:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.808 13:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.808 13:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.808 13:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.808 13:49:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.808 13:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.808 { 00:19:15.808 "cntlid": 111, 00:19:15.808 "qid": 0, 00:19:15.808 "state": "enabled", 00:19:15.808 "thread": "nvmf_tgt_poll_group_000", 00:19:15.808 "listen_address": { 00:19:15.808 "trtype": "TCP", 00:19:15.808 "adrfam": "IPv4", 00:19:15.808 "traddr": "10.0.0.2", 00:19:15.808 "trsvcid": "4420" 00:19:15.808 }, 00:19:15.808 "peer_address": { 00:19:15.808 "trtype": "TCP", 00:19:15.808 "adrfam": "IPv4", 00:19:15.808 "traddr": "10.0.0.1", 00:19:15.808 "trsvcid": "34114" 00:19:15.808 }, 00:19:15.808 "auth": { 00:19:15.808 "state": "completed", 00:19:15.808 "digest": "sha512", 00:19:15.808 "dhgroup": "ffdhe2048" 00:19:15.808 } 00:19:15.808 } 00:19:15.808 ]' 00:19:15.808 13:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.808 13:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.808 13:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.808 13:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:15.808 13:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.068 13:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.068 13:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.068 13:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.068 13:49:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:19:16.639 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.899 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.158 00:19:17.158 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.158 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.158 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.419 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.419 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.419 13:49:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.419 13:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.419 13:49:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.419 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.419 { 00:19:17.419 "cntlid": 113, 00:19:17.419 "qid": 0, 00:19:17.419 "state": "enabled", 00:19:17.419 "thread": "nvmf_tgt_poll_group_000", 00:19:17.419 "listen_address": { 00:19:17.419 "trtype": "TCP", 00:19:17.419 "adrfam": "IPv4", 00:19:17.419 "traddr": "10.0.0.2", 00:19:17.419 "trsvcid": "4420" 00:19:17.419 }, 00:19:17.419 "peer_address": { 00:19:17.419 "trtype": "TCP", 00:19:17.419 "adrfam": "IPv4", 00:19:17.419 "traddr": "10.0.0.1", 00:19:17.419 "trsvcid": "34144" 00:19:17.419 }, 00:19:17.419 "auth": { 00:19:17.419 "state": "completed", 00:19:17.419 "digest": "sha512", 00:19:17.419 "dhgroup": "ffdhe3072" 00:19:17.419 } 00:19:17.419 } 00:19:17.419 ]' 00:19:17.419 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.419 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.419 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.419 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:17.419 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.419 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.419 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.419 13:49:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.680 13:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:19:18.620 13:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.620 13:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.620 13:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.620 13:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.620 13:49:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.620 13:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.620 13:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.620 13:49:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.620 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:18.620 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.620 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.620 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:18.620 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:18.620 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.620 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.620 13:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.620 13:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.620 13:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.620 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.620 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.880 00:19:18.880 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.880 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.880 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.141 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.141 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.141 13:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.141 13:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.141 13:49:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.141 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.141 { 00:19:19.141 "cntlid": 115, 00:19:19.141 "qid": 0, 00:19:19.141 "state": "enabled", 00:19:19.141 "thread": "nvmf_tgt_poll_group_000", 00:19:19.141 "listen_address": { 00:19:19.141 "trtype": "TCP", 00:19:19.141 "adrfam": "IPv4", 00:19:19.141 "traddr": "10.0.0.2", 00:19:19.141 "trsvcid": "4420" 00:19:19.141 }, 00:19:19.141 "peer_address": { 00:19:19.141 "trtype": "TCP", 00:19:19.141 "adrfam": "IPv4", 00:19:19.141 "traddr": "10.0.0.1", 00:19:19.141 "trsvcid": "34178" 00:19:19.141 }, 00:19:19.141 "auth": { 00:19:19.141 "state": "completed", 00:19:19.141 "digest": "sha512", 00:19:19.141 "dhgroup": "ffdhe3072" 00:19:19.141 } 00:19:19.141 } 
00:19:19.141 ]' 00:19:19.141 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.141 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.141 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.141 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.141 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.141 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.141 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.141 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.401 13:49:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.341 13:49:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.341 13:49:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.601 00:19:20.601 13:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.601 13:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.601 13:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.861 13:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.861 13:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.861 13:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.861 13:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.861 13:49:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.861 13:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.861 { 00:19:20.861 "cntlid": 117, 00:19:20.861 "qid": 0, 00:19:20.861 "state": "enabled", 00:19:20.861 "thread": "nvmf_tgt_poll_group_000", 00:19:20.861 "listen_address": { 00:19:20.861 "trtype": "TCP", 00:19:20.861 "adrfam": "IPv4", 00:19:20.861 "traddr": "10.0.0.2", 00:19:20.861 "trsvcid": "4420" 00:19:20.861 }, 00:19:20.861 "peer_address": { 00:19:20.861 "trtype": "TCP", 00:19:20.861 "adrfam": "IPv4", 00:19:20.861 "traddr": "10.0.0.1", 00:19:20.861 "trsvcid": "34204" 00:19:20.861 }, 00:19:20.861 "auth": { 00:19:20.861 "state": "completed", 00:19:20.861 "digest": "sha512", 00:19:20.861 "dhgroup": "ffdhe3072" 00:19:20.861 } 00:19:20.861 } 00:19:20.861 ]' 00:19:20.861 13:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.861 13:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.861 13:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.861 13:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.861 13:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.861 13:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.861 13:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.861 13:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.121 13:49:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.065 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.325 00:19:22.325 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.325 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.325 13:49:48 nvmf_tcp.nvmf_auth_target -- 
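The nvme connect invocation wrapped across the lines just above is the kernel-initiator half of each round: the host passes the DH-HMAC-CHAP secrets straight to nvme-cli rather than through the SPDK host RPC socket. A minimal sketch of that step, using the target address and host NQN from this run; HOST_SECRET and CTRL_SECRET are stand-ins for the full DHHC-1 strings that appear verbatim in the log, not new values:

  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
  HOST_SECRET='DHHC-1:02:...'   # host-side secret (placeholder; the full value is logged above)
  CTRL_SECRET='DHHC-1:01:...'   # controller-side secret for bidirectional auth (placeholder)
  # Connect with bidirectional DH-HMAC-CHAP, then tear the session down again.
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" \
      --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
  nvme disconnect -n "$SUBNQN"   # expect: NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
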
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.585 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.585 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.585 13:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.585 13:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.585 13:49:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.585 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.585 { 00:19:22.585 "cntlid": 119, 00:19:22.585 "qid": 0, 00:19:22.585 "state": "enabled", 00:19:22.585 "thread": "nvmf_tgt_poll_group_000", 00:19:22.585 "listen_address": { 00:19:22.585 "trtype": "TCP", 00:19:22.585 "adrfam": "IPv4", 00:19:22.585 "traddr": "10.0.0.2", 00:19:22.585 "trsvcid": "4420" 00:19:22.585 }, 00:19:22.585 "peer_address": { 00:19:22.585 "trtype": "TCP", 00:19:22.585 "adrfam": "IPv4", 00:19:22.585 "traddr": "10.0.0.1", 00:19:22.585 "trsvcid": "38150" 00:19:22.585 }, 00:19:22.585 "auth": { 00:19:22.585 "state": "completed", 00:19:22.585 "digest": "sha512", 00:19:22.585 "dhgroup": "ffdhe3072" 00:19:22.585 } 00:19:22.585 } 00:19:22.585 ]' 00:19:22.585 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.585 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.585 13:49:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.585 13:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.585 13:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.585 13:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.585 13:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.585 13:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.845 13:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:19:23.417 13:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.677 13:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.677 13:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.677 13:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.677 13:49:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.677 13:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.677 13:49:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.677 13:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:23.677 13:49:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:23.677 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:23.677 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.677 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.677 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:23.677 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:23.677 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.677 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.677 13:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.677 13:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.677 13:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.677 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.677 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.937 00:19:23.937 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.937 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.937 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.197 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.197 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.197 13:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.197 13:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.197 13:49:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.197 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.197 { 00:19:24.197 "cntlid": 121, 00:19:24.197 "qid": 0, 00:19:24.197 "state": "enabled", 00:19:24.197 "thread": "nvmf_tgt_poll_group_000", 00:19:24.197 "listen_address": { 00:19:24.197 "trtype": "TCP", 00:19:24.197 "adrfam": "IPv4", 
00:19:24.197 "traddr": "10.0.0.2", 00:19:24.197 "trsvcid": "4420" 00:19:24.197 }, 00:19:24.197 "peer_address": { 00:19:24.197 "trtype": "TCP", 00:19:24.197 "adrfam": "IPv4", 00:19:24.197 "traddr": "10.0.0.1", 00:19:24.197 "trsvcid": "38196" 00:19:24.197 }, 00:19:24.197 "auth": { 00:19:24.197 "state": "completed", 00:19:24.197 "digest": "sha512", 00:19:24.197 "dhgroup": "ffdhe4096" 00:19:24.197 } 00:19:24.197 } 00:19:24.197 ]' 00:19:24.197 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.197 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.197 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.197 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:24.197 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.456 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.456 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.456 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.456 13:49:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:25.395 13:49:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.395 13:49:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.655 00:19:25.655 13:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.655 13:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.655 13:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.915 13:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.915 13:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.915 13:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.915 13:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.915 13:49:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.915 13:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.915 { 00:19:25.915 "cntlid": 123, 00:19:25.915 "qid": 0, 00:19:25.915 "state": "enabled", 00:19:25.915 "thread": "nvmf_tgt_poll_group_000", 00:19:25.915 "listen_address": { 00:19:25.915 "trtype": "TCP", 00:19:25.915 "adrfam": "IPv4", 00:19:25.915 "traddr": "10.0.0.2", 00:19:25.915 "trsvcid": "4420" 00:19:25.915 }, 00:19:25.915 "peer_address": { 00:19:25.915 "trtype": "TCP", 00:19:25.915 "adrfam": "IPv4", 00:19:25.915 "traddr": "10.0.0.1", 00:19:25.915 "trsvcid": "38220" 00:19:25.915 }, 00:19:25.915 "auth": { 00:19:25.915 "state": "completed", 00:19:25.915 "digest": "sha512", 00:19:25.915 "dhgroup": "ffdhe4096" 00:19:25.915 } 00:19:25.915 } 00:19:25.915 ]' 00:19:25.915 13:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.915 13:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.915 13:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.915 13:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.915 13:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.915 13:49:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.915 13:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.915 13:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.175 13:49:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.113 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.373 00:19:27.373 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.373 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.373 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.633 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.633 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.633 13:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.633 13:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.633 13:49:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.633 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.633 { 00:19:27.633 "cntlid": 125, 00:19:27.633 "qid": 0, 00:19:27.633 "state": "enabled", 00:19:27.633 "thread": "nvmf_tgt_poll_group_000", 00:19:27.633 "listen_address": { 00:19:27.633 "trtype": "TCP", 00:19:27.633 "adrfam": "IPv4", 00:19:27.633 "traddr": "10.0.0.2", 00:19:27.633 "trsvcid": "4420" 00:19:27.633 }, 00:19:27.633 "peer_address": { 00:19:27.633 "trtype": "TCP", 00:19:27.633 "adrfam": "IPv4", 00:19:27.633 "traddr": "10.0.0.1", 00:19:27.633 "trsvcid": "38252" 00:19:27.633 }, 00:19:27.633 "auth": { 00:19:27.633 "state": "completed", 00:19:27.633 "digest": "sha512", 00:19:27.633 "dhgroup": "ffdhe4096" 00:19:27.633 } 00:19:27.633 } 00:19:27.633 ]' 00:19:27.633 13:49:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.633 13:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.633 13:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.633 13:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.633 13:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.633 13:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.633 13:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.633 13:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.894 13:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:19:28.834 13:49:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
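The round that has just finished (sha512 / ffdhe4096 / key2) is driven entirely by SPDK RPCs before the nvme-cli connect is attempted. A condensed sketch of that sequence, assuming the target listens on 10.0.0.2:4420, the host bdev_nvme application answers on /var/tmp/host.sock, and key2/ckey2 were loaded into the keyring earlier in the job; rpc_cmd in the log is the target-side wrapper, and the target's default RPC socket is assumed here:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  # Host side: restrict the initiator to the digest/dhgroup combination under test.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # Target side: allow this host with key2 (and ckey2 for bidirectional authentication).
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Host side: attach a controller with the matching keys and confirm it shows up.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  # Clean up so the next digest/dhgroup/key combination starts from scratch.
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
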
00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.834 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:29.094 00:19:29.094 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.094 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.094 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.354 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.354 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.354 13:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.354 13:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:29.354 13:49:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.354 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.354 { 00:19:29.354 "cntlid": 127, 00:19:29.354 "qid": 0, 00:19:29.354 "state": "enabled", 00:19:29.354 "thread": "nvmf_tgt_poll_group_000", 00:19:29.354 "listen_address": { 00:19:29.354 "trtype": "TCP", 00:19:29.354 "adrfam": "IPv4", 00:19:29.354 "traddr": "10.0.0.2", 00:19:29.354 "trsvcid": "4420" 00:19:29.354 }, 00:19:29.354 "peer_address": { 00:19:29.354 "trtype": "TCP", 00:19:29.354 "adrfam": "IPv4", 00:19:29.354 "traddr": "10.0.0.1", 00:19:29.354 "trsvcid": "38276" 00:19:29.354 }, 00:19:29.354 "auth": { 00:19:29.354 "state": "completed", 00:19:29.354 "digest": "sha512", 00:19:29.354 "dhgroup": "ffdhe4096" 00:19:29.354 } 00:19:29.354 } 00:19:29.354 ]' 00:19:29.354 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.354 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.354 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.354 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.354 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.354 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.354 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.354 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.614 13:49:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:19:30.219 13:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.219 13:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.219 13:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.219 13:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.219 13:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.219 13:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.219 13:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.219 13:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:30.219 13:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:30.479 13:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
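Each qpairs block in this log (most recently cntlid 127, above) is checked the same way: the script captures the nvmf_subsystem_get_qpairs output and asserts the negotiated digest, dhgroup, and final auth state. A sketch of that check, assuming the JSON has been captured in qpairs roughly as the script does; the expected dhgroup changes per round (ffdhe4096 in the block above):

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)   # rpc_cmd = target-side rpc.py wrapper
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]   # digest under test
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]   # dhgroup under test for this round
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # DH-HMAC-CHAP negotiation finished
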
connect_authenticate sha512 ffdhe6144 0 00:19:30.479 13:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.479 13:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.479 13:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:30.479 13:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:30.479 13:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.479 13:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.479 13:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.479 13:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.479 13:49:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.479 13:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.479 13:49:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.738 00:19:30.738 13:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.738 13:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.738 13:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.998 13:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.998 13:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.998 13:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.998 13:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.998 13:49:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.998 13:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.998 { 00:19:30.998 "cntlid": 129, 00:19:30.998 "qid": 0, 00:19:30.998 "state": "enabled", 00:19:30.998 "thread": "nvmf_tgt_poll_group_000", 00:19:30.998 "listen_address": { 00:19:30.998 "trtype": "TCP", 00:19:30.998 "adrfam": "IPv4", 00:19:30.998 "traddr": "10.0.0.2", 00:19:30.998 "trsvcid": "4420" 00:19:30.998 }, 00:19:30.998 "peer_address": { 00:19:30.998 "trtype": "TCP", 00:19:30.998 "adrfam": "IPv4", 00:19:30.998 "traddr": "10.0.0.1", 00:19:30.998 "trsvcid": "38306" 00:19:30.998 }, 00:19:30.998 "auth": { 00:19:30.998 "state": "completed", 00:19:30.998 "digest": "sha512", 00:19:30.998 "dhgroup": "ffdhe6144" 00:19:30.998 } 00:19:30.998 } 00:19:30.998 ]' 00:19:30.998 13:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.998 13:49:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.998 13:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.998 13:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:30.998 13:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.998 13:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.998 13:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.998 13:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.258 13:49:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.197 13:49:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.197 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.457 00:19:32.457 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.457 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.457 13:49:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.717 13:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.717 13:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.717 13:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.717 13:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.717 13:49:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.718 13:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.718 { 00:19:32.718 "cntlid": 131, 00:19:32.718 "qid": 0, 00:19:32.718 "state": "enabled", 00:19:32.718 "thread": "nvmf_tgt_poll_group_000", 00:19:32.718 "listen_address": { 00:19:32.718 "trtype": "TCP", 00:19:32.718 "adrfam": "IPv4", 00:19:32.718 "traddr": "10.0.0.2", 00:19:32.718 "trsvcid": "4420" 00:19:32.718 }, 00:19:32.718 "peer_address": { 00:19:32.718 "trtype": "TCP", 00:19:32.718 "adrfam": "IPv4", 00:19:32.718 "traddr": "10.0.0.1", 00:19:32.718 "trsvcid": "41682" 00:19:32.718 }, 00:19:32.718 "auth": { 00:19:32.718 "state": "completed", 00:19:32.718 "digest": "sha512", 00:19:32.718 "dhgroup": "ffdhe6144" 00:19:32.718 } 00:19:32.718 } 00:19:32.718 ]' 00:19:32.718 13:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.718 13:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.718 13:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.718 13:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.718 13:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.977 13:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.977 13:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.977 13:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.977 13:49:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.918 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.177 00:19:34.437 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.437 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.437 13:50:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.437 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.437 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.437 13:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.437 13:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.437 13:50:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.437 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.437 { 00:19:34.437 "cntlid": 133, 00:19:34.437 "qid": 0, 00:19:34.437 "state": "enabled", 00:19:34.437 "thread": "nvmf_tgt_poll_group_000", 00:19:34.437 "listen_address": { 00:19:34.437 "trtype": "TCP", 00:19:34.437 "adrfam": "IPv4", 00:19:34.437 "traddr": "10.0.0.2", 00:19:34.437 "trsvcid": "4420" 00:19:34.437 }, 00:19:34.437 "peer_address": { 00:19:34.437 "trtype": "TCP", 00:19:34.437 "adrfam": "IPv4", 00:19:34.437 "traddr": "10.0.0.1", 00:19:34.437 "trsvcid": "41710" 00:19:34.437 }, 00:19:34.437 "auth": { 00:19:34.437 "state": "completed", 00:19:34.437 "digest": "sha512", 00:19:34.437 "dhgroup": "ffdhe6144" 00:19:34.437 } 00:19:34.437 } 00:19:34.437 ]' 00:19:34.437 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.437 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.437 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.696 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:34.696 13:50:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.696 13:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.697 13:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.697 13:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.697 13:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:19:35.635 13:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.635 13:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:35.635 13:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.635 13:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.635 13:50:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.635 13:50:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.635 13:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.635 13:50:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.635 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:35.635 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.635 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:35.635 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.635 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:35.635 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.635 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:35.635 13:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.635 13:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.635 13:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.635 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.635 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.203 00:19:36.203 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.203 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.203 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.203 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.203 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.203 13:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.203 13:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.203 13:50:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.203 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.203 { 00:19:36.203 "cntlid": 135, 00:19:36.203 "qid": 0, 00:19:36.203 "state": "enabled", 00:19:36.203 "thread": "nvmf_tgt_poll_group_000", 00:19:36.203 "listen_address": { 00:19:36.203 "trtype": "TCP", 00:19:36.203 "adrfam": "IPv4", 00:19:36.203 "traddr": "10.0.0.2", 00:19:36.203 "trsvcid": "4420" 00:19:36.203 }, 
00:19:36.203 "peer_address": { 00:19:36.203 "trtype": "TCP", 00:19:36.203 "adrfam": "IPv4", 00:19:36.203 "traddr": "10.0.0.1", 00:19:36.203 "trsvcid": "41730" 00:19:36.203 }, 00:19:36.203 "auth": { 00:19:36.203 "state": "completed", 00:19:36.203 "digest": "sha512", 00:19:36.203 "dhgroup": "ffdhe6144" 00:19:36.203 } 00:19:36.203 } 00:19:36.203 ]' 00:19:36.203 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.203 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.203 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.463 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.464 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.464 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.464 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.464 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.464 13:50:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.405 13:50:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.974 00:19:37.974 13:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.974 13:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.974 13:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.254 13:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.255 13:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.255 13:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.255 13:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.255 13:50:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.255 13:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.255 { 00:19:38.255 "cntlid": 137, 00:19:38.255 "qid": 0, 00:19:38.255 "state": "enabled", 00:19:38.255 "thread": "nvmf_tgt_poll_group_000", 00:19:38.255 "listen_address": { 00:19:38.255 "trtype": "TCP", 00:19:38.255 "adrfam": "IPv4", 00:19:38.255 "traddr": "10.0.0.2", 00:19:38.255 "trsvcid": "4420" 00:19:38.255 }, 00:19:38.255 "peer_address": { 00:19:38.255 "trtype": "TCP", 00:19:38.255 "adrfam": "IPv4", 00:19:38.255 "traddr": "10.0.0.1", 00:19:38.255 "trsvcid": "41750" 00:19:38.255 }, 00:19:38.255 "auth": { 00:19:38.255 "state": "completed", 00:19:38.255 "digest": "sha512", 00:19:38.255 "dhgroup": "ffdhe8192" 00:19:38.255 } 00:19:38.255 } 00:19:38.255 ]' 00:19:38.255 13:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.255 13:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.255 13:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.255 13:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.255 13:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.255 13:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.255 13:50:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.255 13:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.515 13:50:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.454 13:50:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.024 00:19:40.024 13:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.024 13:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.024 13:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.024 13:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.024 13:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.024 13:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.024 13:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.024 13:50:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.024 13:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.024 { 00:19:40.024 "cntlid": 139, 00:19:40.024 "qid": 0, 00:19:40.024 "state": "enabled", 00:19:40.024 "thread": "nvmf_tgt_poll_group_000", 00:19:40.024 "listen_address": { 00:19:40.024 "trtype": "TCP", 00:19:40.025 "adrfam": "IPv4", 00:19:40.025 "traddr": "10.0.0.2", 00:19:40.025 "trsvcid": "4420" 00:19:40.025 }, 00:19:40.025 "peer_address": { 00:19:40.025 "trtype": "TCP", 00:19:40.025 "adrfam": "IPv4", 00:19:40.025 "traddr": "10.0.0.1", 00:19:40.025 "trsvcid": "41768" 00:19:40.025 }, 00:19:40.025 "auth": { 00:19:40.025 "state": "completed", 00:19:40.025 "digest": "sha512", 00:19:40.025 "dhgroup": "ffdhe8192" 00:19:40.025 } 00:19:40.025 } 00:19:40.025 ]' 00:19:40.286 13:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.286 13:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.286 13:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.286 13:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.286 13:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.286 13:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.286 13:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.286 13:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.547 13:50:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWFmMGViNDE1NTAwYTU3YTEzZTExZDE3NmQ2NGJlMze1D3Ss: --dhchap-ctrl-secret DHHC-1:02:MjFiZDc5OWFkMjUxNTJmZjJmMjg3MGZmMWE1M2U0MTA2MzFmOTA3ZTUyZTFjZWY324H40Q==: 00:19:41.116 13:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.116 13:50:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.116 13:50:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.116 13:50:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.116 13:50:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.116 13:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.117 13:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:41.117 13:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:41.377 13:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:41.377 13:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.377 13:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:41.377 13:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:41.377 13:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:41.377 13:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.377 13:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.377 13:50:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.377 13:50:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.377 13:50:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.377 13:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.377 13:50:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.947 00:19:41.947 13:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.947 13:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.947 13:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.207 13:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.207 13:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.207 13:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.207 13:50:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:42.207 13:50:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.207 13:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.207 { 00:19:42.207 "cntlid": 141, 00:19:42.207 "qid": 0, 00:19:42.207 "state": "enabled", 00:19:42.207 "thread": "nvmf_tgt_poll_group_000", 00:19:42.207 "listen_address": { 00:19:42.207 "trtype": "TCP", 00:19:42.207 "adrfam": "IPv4", 00:19:42.207 "traddr": "10.0.0.2", 00:19:42.207 "trsvcid": "4420" 00:19:42.207 }, 00:19:42.207 "peer_address": { 00:19:42.207 "trtype": "TCP", 00:19:42.207 "adrfam": "IPv4", 00:19:42.207 "traddr": "10.0.0.1", 00:19:42.207 "trsvcid": "46984" 00:19:42.207 }, 00:19:42.207 "auth": { 00:19:42.207 "state": "completed", 00:19:42.207 "digest": "sha512", 00:19:42.207 "dhgroup": "ffdhe8192" 00:19:42.207 } 00:19:42.207 } 00:19:42.207 ]' 00:19:42.207 13:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.207 13:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.207 13:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.207 13:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:42.207 13:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.207 13:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.207 13:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.207 13:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.467 13:50:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NDVmMGE2YjY2ZmUzMDlkYjIwNzhmZGVlN2EzZDZjMTc1ZmU2NDhiNThmZjlmMmFl/m3xsw==: --dhchap-ctrl-secret DHHC-1:01:NDBmMjAxZjFlZGQ1ZDg4M2ZiYzA2NDY5ODNhMmM0ZDJalygU: 00:19:43.037 13:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.037 13:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:43.037 13:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.037 13:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.037 13:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.037 13:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.037 13:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.037 13:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.297 13:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:19:43.297 13:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.297 13:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.297 13:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:43.297 13:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:43.297 13:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.297 13:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:43.297 13:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.297 13:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.297 13:50:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.297 13:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.297 13:50:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.868 00:19:43.868 13:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.868 13:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.868 13:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.128 13:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.128 13:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.128 13:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.128 13:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.128 13:50:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.128 13:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.128 { 00:19:44.128 "cntlid": 143, 00:19:44.128 "qid": 0, 00:19:44.128 "state": "enabled", 00:19:44.128 "thread": "nvmf_tgt_poll_group_000", 00:19:44.128 "listen_address": { 00:19:44.128 "trtype": "TCP", 00:19:44.128 "adrfam": "IPv4", 00:19:44.128 "traddr": "10.0.0.2", 00:19:44.128 "trsvcid": "4420" 00:19:44.128 }, 00:19:44.128 "peer_address": { 00:19:44.128 "trtype": "TCP", 00:19:44.128 "adrfam": "IPv4", 00:19:44.128 "traddr": "10.0.0.1", 00:19:44.128 "trsvcid": "46998" 00:19:44.128 }, 00:19:44.128 "auth": { 00:19:44.128 "state": "completed", 00:19:44.128 "digest": "sha512", 00:19:44.128 "dhgroup": "ffdhe8192" 00:19:44.128 } 00:19:44.128 } 00:19:44.128 ]' 00:19:44.128 13:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.128 13:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.128 
13:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.128 13:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:44.128 13:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.128 13:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.128 13:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.128 13:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.389 13:50:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:19:44.975 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.975 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.975 13:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.975 13:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.975 13:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.267 13:50:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.837 00:19:45.837 13:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.837 13:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.837 13:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.837 13:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.837 13:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.837 13:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.837 13:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.837 13:50:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.121 13:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.121 { 00:19:46.121 "cntlid": 145, 00:19:46.121 "qid": 0, 00:19:46.121 "state": "enabled", 00:19:46.121 "thread": "nvmf_tgt_poll_group_000", 00:19:46.121 "listen_address": { 00:19:46.121 "trtype": "TCP", 00:19:46.121 "adrfam": "IPv4", 00:19:46.121 "traddr": "10.0.0.2", 00:19:46.121 "trsvcid": "4420" 00:19:46.121 }, 00:19:46.121 "peer_address": { 00:19:46.121 "trtype": "TCP", 00:19:46.121 "adrfam": "IPv4", 00:19:46.121 "traddr": "10.0.0.1", 00:19:46.121 "trsvcid": "47028" 00:19:46.121 }, 00:19:46.121 "auth": { 00:19:46.121 "state": "completed", 00:19:46.121 "digest": "sha512", 00:19:46.121 "dhgroup": "ffdhe8192" 00:19:46.121 } 00:19:46.121 } 00:19:46.121 ]' 00:19:46.121 13:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.121 13:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.121 13:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.121 13:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.121 13:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.121 13:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.121 13:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.121 13:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.121 13:50:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZDYzODg3ZTRiNTJiYjBmZWZjZDhiNmQ3Y2I1YzEwM2I3MGUzMjVjZWY5MDUyY2IwXjPuNA==: --dhchap-ctrl-secret DHHC-1:03:ZmY4YTZiZDRiOTExZWIxMjkxZTZkYmNhNGIxOGJhM2ZmNDIzMGUzOTYwZDlhODZiOTRlODk1NDlhNTRjOGYwMl0Cku4=: 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:47.061 13:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:19:47.630 request: 00:19:47.630 { 00:19:47.630 "name": "nvme0", 00:19:47.630 "trtype": "tcp", 00:19:47.630 "traddr": "10.0.0.2", 00:19:47.630 "adrfam": "ipv4", 00:19:47.630 "trsvcid": "4420", 00:19:47.630 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:47.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:47.630 "prchk_reftag": false, 00:19:47.630 "prchk_guard": false, 00:19:47.630 "hdgst": false, 00:19:47.630 "ddgst": false, 00:19:47.630 "dhchap_key": "key2", 00:19:47.630 "method": "bdev_nvme_attach_controller", 00:19:47.630 "req_id": 1 00:19:47.630 } 00:19:47.630 Got JSON-RPC error response 00:19:47.630 response: 00:19:47.630 { 00:19:47.630 "code": -5, 00:19:47.630 "message": "Input/output error" 00:19:47.630 } 00:19:47.630 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:47.630 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:47.630 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:47.630 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.630 13:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.630 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.630 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.630 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.630 13:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.630 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.630 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.630 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.630 13:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:47.630 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:47.630 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:47.631 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:47.631 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.631 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:47.631 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.631 13:50:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:47.631 13:50:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:47.890 request: 00:19:47.890 { 00:19:47.890 "name": "nvme0", 00:19:47.890 "trtype": "tcp", 00:19:47.890 "traddr": "10.0.0.2", 00:19:47.890 "adrfam": "ipv4", 00:19:47.890 "trsvcid": "4420", 00:19:47.890 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:47.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:47.890 "prchk_reftag": false, 00:19:47.890 "prchk_guard": false, 00:19:47.890 "hdgst": false, 00:19:47.890 "ddgst": false, 00:19:47.890 "dhchap_key": "key1", 00:19:47.890 "dhchap_ctrlr_key": "ckey2", 00:19:47.890 "method": "bdev_nvme_attach_controller", 00:19:47.890 "req_id": 1 00:19:47.890 } 00:19:47.890 Got JSON-RPC error response 00:19:47.890 response: 00:19:47.890 { 00:19:47.890 "code": -5, 00:19:47.890 "message": "Input/output error" 00:19:47.890 } 00:19:48.149 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:48.149 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:48.149 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:48.149 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:48.149 13:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:48.149 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.149 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.150 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.150 13:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:48.150 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.150 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.150 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.150 13:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.150 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:48.150 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.150 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:19:48.150 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.150 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:48.150 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.150 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.150 13:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.410 request: 00:19:48.410 { 00:19:48.410 "name": "nvme0", 00:19:48.410 "trtype": "tcp", 00:19:48.410 "traddr": "10.0.0.2", 00:19:48.410 "adrfam": "ipv4", 00:19:48.410 "trsvcid": "4420", 00:19:48.410 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:48.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:48.410 "prchk_reftag": false, 00:19:48.410 "prchk_guard": false, 00:19:48.410 "hdgst": false, 00:19:48.410 "ddgst": false, 00:19:48.410 "dhchap_key": "key1", 00:19:48.410 "dhchap_ctrlr_key": "ckey1", 00:19:48.410 "method": "bdev_nvme_attach_controller", 00:19:48.410 "req_id": 1 00:19:48.410 } 00:19:48.410 Got JSON-RPC error response 00:19:48.410 response: 00:19:48.410 { 00:19:48.410 "code": -5, 00:19:48.410 "message": "Input/output error" 00:19:48.410 } 00:19:48.410 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:48.410 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:48.410 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:48.410 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:48.410 13:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:48.410 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.410 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.669 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.669 13:50:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1081788 00:19:48.669 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1081788 ']' 00:19:48.669 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1081788 00:19:48.669 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:48.669 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:48.669 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1081788 00:19:48.669 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:48.669 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:19:48.669 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1081788' 00:19:48.669 killing process with pid 1081788 00:19:48.669 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1081788 00:19:48.669 13:50:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1081788 00:19:48.669 13:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:48.669 13:50:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:48.669 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:48.669 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.669 13:50:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1108571 00:19:48.669 13:50:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1108571 00:19:48.669 13:50:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:48.669 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1108571 ']' 00:19:48.669 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.669 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.669 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.669 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.669 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.609 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.609 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:49.609 13:50:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.609 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.609 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.609 13:50:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.609 13:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:49.609 13:50:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1108571 00:19:49.609 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1108571 ']' 00:19:49.609 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.609 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.609 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
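For reference, the target/auth.sh@139 / nvmf/common.sh@480 step traced above amounts to relaunching the target inside the cvl_0_0_ns_spdk namespace with DHCHAP auth logging enabled. A minimal sketch of that relaunch (paths relative to an SPDK checkout rather than the full CI workspace path; the wait loop only approximates what waitforlisten does):

  # restart nvmf_tgt in the test namespace, RPC paused until configured, nvmf_auth log flag on
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # poll the default RPC socket until the target answers before sending configuration RPCs
  until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done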
00:19:49.609 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.609 13:50:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.869 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.439 00:19:50.439 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.439 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.439 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.439 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.439 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.439 13:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.439 13:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.439 13:50:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.439 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.439 { 00:19:50.439 
"cntlid": 1, 00:19:50.439 "qid": 0, 00:19:50.439 "state": "enabled", 00:19:50.439 "thread": "nvmf_tgt_poll_group_000", 00:19:50.439 "listen_address": { 00:19:50.439 "trtype": "TCP", 00:19:50.439 "adrfam": "IPv4", 00:19:50.439 "traddr": "10.0.0.2", 00:19:50.439 "trsvcid": "4420" 00:19:50.439 }, 00:19:50.439 "peer_address": { 00:19:50.439 "trtype": "TCP", 00:19:50.439 "adrfam": "IPv4", 00:19:50.439 "traddr": "10.0.0.1", 00:19:50.439 "trsvcid": "47090" 00:19:50.439 }, 00:19:50.439 "auth": { 00:19:50.439 "state": "completed", 00:19:50.439 "digest": "sha512", 00:19:50.439 "dhgroup": "ffdhe8192" 00:19:50.439 } 00:19:50.439 } 00:19:50.439 ]' 00:19:50.439 13:50:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.700 13:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.700 13:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.700 13:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:50.700 13:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.700 13:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.700 13:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.700 13:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.961 13:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ZmM1NWYzYzAxZmIxOTIxMDk2YjEzNzEyOGRhYzUwMmFlYjNhOThmNWQ0NGJmMjZmNjQwZDBhYTJjNWUwNzBjZa+XlBc=: 00:19:51.531 13:50:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.531 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.531 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.531 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.531 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.531 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:51.531 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.531 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.531 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.531 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:51.531 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:51.792 13:50:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.792 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:51.792 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.792 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:51.792 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.792 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:51.792 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.792 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.792 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.052 request: 00:19:52.052 { 00:19:52.052 "name": "nvme0", 00:19:52.052 "trtype": "tcp", 00:19:52.052 "traddr": "10.0.0.2", 00:19:52.052 "adrfam": "ipv4", 00:19:52.052 "trsvcid": "4420", 00:19:52.052 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:52.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:52.052 "prchk_reftag": false, 00:19:52.052 "prchk_guard": false, 00:19:52.052 "hdgst": false, 00:19:52.052 "ddgst": false, 00:19:52.052 "dhchap_key": "key3", 00:19:52.052 "method": "bdev_nvme_attach_controller", 00:19:52.052 "req_id": 1 00:19:52.052 } 00:19:52.052 Got JSON-RPC error response 00:19:52.052 response: 00:19:52.052 { 00:19:52.052 "code": -5, 00:19:52.052 "message": "Input/output error" 00:19:52.052 } 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.052 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.313 request: 00:19:52.313 { 00:19:52.313 "name": "nvme0", 00:19:52.313 "trtype": "tcp", 00:19:52.313 "traddr": "10.0.0.2", 00:19:52.313 "adrfam": "ipv4", 00:19:52.313 "trsvcid": "4420", 00:19:52.313 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:52.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:52.313 "prchk_reftag": false, 00:19:52.313 "prchk_guard": false, 00:19:52.313 "hdgst": false, 00:19:52.313 "ddgst": false, 00:19:52.313 "dhchap_key": "key3", 00:19:52.313 "method": "bdev_nvme_attach_controller", 00:19:52.313 "req_id": 1 00:19:52.313 } 00:19:52.313 Got JSON-RPC error response 00:19:52.313 response: 00:19:52.313 { 00:19:52.313 "code": -5, 00:19:52.313 "message": "Input/output error" 00:19:52.313 } 00:19:52.313 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:52.313 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:52.313 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:52.313 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:52.313 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:52.313 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:52.313 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:52.313 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:52.313 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:52.313 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:52.574 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:52.574 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.574 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.574 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.574 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:52.574 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.574 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.574 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.574 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:52.574 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:52.574 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:52.574 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:52.574 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:52.574 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:52.575 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:52.575 13:50:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:52.575 13:50:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:52.575 request: 00:19:52.575 { 00:19:52.575 "name": "nvme0", 00:19:52.575 "trtype": "tcp", 00:19:52.575 "traddr": "10.0.0.2", 00:19:52.575 "adrfam": "ipv4", 00:19:52.575 "trsvcid": "4420", 00:19:52.575 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:52.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:52.575 "prchk_reftag": false, 00:19:52.575 "prchk_guard": false, 00:19:52.575 "hdgst": false, 00:19:52.575 "ddgst": false, 00:19:52.575 
"dhchap_key": "key0", 00:19:52.575 "dhchap_ctrlr_key": "key1", 00:19:52.575 "method": "bdev_nvme_attach_controller", 00:19:52.575 "req_id": 1 00:19:52.575 } 00:19:52.575 Got JSON-RPC error response 00:19:52.575 response: 00:19:52.575 { 00:19:52.575 "code": -5, 00:19:52.575 "message": "Input/output error" 00:19:52.575 } 00:19:52.575 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:52.575 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:52.575 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:52.575 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:52.575 13:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:52.575 13:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:52.836 00:19:52.836 13:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:52.836 13:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:52.836 13:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.097 13:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.097 13:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.097 13:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.097 13:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:53.097 13:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:53.097 13:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1082040 00:19:53.097 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1082040 ']' 00:19:53.097 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1082040 00:19:53.097 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:53.097 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:53.097 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1082040 00:19:53.357 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:53.357 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:53.357 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1082040' 00:19:53.357 killing process with pid 1082040 00:19:53.357 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1082040 00:19:53.357 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1082040 
00:19:53.357 13:50:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:53.357 13:50:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:53.357 13:50:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:53.357 13:50:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.357 13:50:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:53.357 13:50:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.357 13:50:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.357 rmmod nvme_tcp 00:19:53.357 rmmod nvme_fabrics 00:19:53.617 rmmod nvme_keyring 00:19:53.617 13:50:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.617 13:50:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:53.617 13:50:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:53.617 13:50:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1108571 ']' 00:19:53.617 13:50:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1108571 00:19:53.617 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1108571 ']' 00:19:53.617 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1108571 00:19:53.617 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:53.617 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:53.617 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1108571 00:19:53.617 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:53.617 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:53.617 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1108571' 00:19:53.617 killing process with pid 1108571 00:19:53.617 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1108571 00:19:53.617 13:50:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1108571 00:19:53.617 13:50:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:53.617 13:50:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:53.617 13:50:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:53.617 13:50:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.617 13:50:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:53.618 13:50:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.618 13:50:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.618 13:50:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.163 13:50:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:56.163 13:50:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.n3x /tmp/spdk.key-sha256.Bob /tmp/spdk.key-sha384.vBT /tmp/spdk.key-sha512.LOp /tmp/spdk.key-sha512.Jfh /tmp/spdk.key-sha384.yKK /tmp/spdk.key-sha256.C5s '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:56.163 00:19:56.163 real 2m23.984s 00:19:56.163 user 5m20.585s 00:19:56.163 sys 0m21.271s 00:19:56.163 13:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:56.163 13:50:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.163 ************************************ 00:19:56.163 END TEST nvmf_auth_target 00:19:56.163 ************************************ 00:19:56.163 13:50:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:56.163 13:50:22 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:56.163 13:50:22 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:56.163 13:50:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:56.163 13:50:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:56.163 13:50:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:56.163 ************************************ 00:19:56.163 START TEST nvmf_bdevio_no_huge 00:19:56.163 ************************************ 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:56.163 * Looking for test storage... 00:19:56.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
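The common.sh sourcing and PATH setup traced above all happen inside the bdevio entry point started at the top of this test. Roughly, and leaving out the run_test CI wrapper, the invocation is:

  # from the SPDK repo root; in this run --no-hugepages leads to both nvmf_tgt and the bdevio
  # app being started with --no-huge -s 1024, as the later entries in this trace show
  ./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages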
00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:56.163 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:56.164 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:56.164 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:56.164 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:56.164 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:56.164 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.164 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:56.164 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:56.164 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:56.164 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.164 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.164 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.164 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:56.164 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:56.164 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:56.164 13:50:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:02.891 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:02.891 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:02.891 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:02.891 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:02.891 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.892 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.892 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:02.892 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.892 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.892 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:02.892 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:02.892 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.892 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:03.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:03.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:20:03.152 00:20:03.152 --- 10.0.0.2 ping statistics --- 00:20:03.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.152 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:03.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:03.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:20:03.152 00:20:03.152 --- 10.0.0.1 ping statistics --- 00:20:03.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.152 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1113826 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1113826 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1113826 ']' 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.152 13:50:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.413 [2024-07-15 13:50:29.710023] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:03.413 [2024-07-15 13:50:29.710094] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:03.413 [2024-07-15 13:50:29.803090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.413 [2024-07-15 13:50:29.911113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:03.413 [2024-07-15 13:50:29.911181] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.413 [2024-07-15 13:50:29.911189] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.413 [2024-07-15 13:50:29.911196] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.413 [2024-07-15 13:50:29.911202] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.413 [2024-07-15 13:50:29.911398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:03.413 [2024-07-15 13:50:29.911556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:03.413 [2024-07-15 13:50:29.911715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.413 [2024-07-15 13:50:29.911715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:04.354 [2024-07-15 13:50:30.562854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:04.354 Malloc0 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.354 13:50:30 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:04.354 [2024-07-15 13:50:30.616553] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.354 { 00:20:04.354 "params": { 00:20:04.354 "name": "Nvme$subsystem", 00:20:04.354 "trtype": "$TEST_TRANSPORT", 00:20:04.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.354 "adrfam": "ipv4", 00:20:04.354 "trsvcid": "$NVMF_PORT", 00:20:04.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.354 "hdgst": ${hdgst:-false}, 00:20:04.354 "ddgst": ${ddgst:-false} 00:20:04.354 }, 00:20:04.354 "method": "bdev_nvme_attach_controller" 00:20:04.354 } 00:20:04.354 EOF 00:20:04.354 )") 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:04.354 13:50:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:04.354 "params": { 00:20:04.354 "name": "Nvme1", 00:20:04.354 "trtype": "tcp", 00:20:04.354 "traddr": "10.0.0.2", 00:20:04.354 "adrfam": "ipv4", 00:20:04.354 "trsvcid": "4420", 00:20:04.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:04.354 "hdgst": false, 00:20:04.354 "ddgst": false 00:20:04.354 }, 00:20:04.354 "method": "bdev_nvme_attach_controller" 00:20:04.354 }' 00:20:04.354 [2024-07-15 13:50:30.674317] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:04.354 [2024-07-15 13:50:30.674388] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1113957 ] 00:20:04.354 [2024-07-15 13:50:30.744216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:04.354 [2024-07-15 13:50:30.841834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.354 [2024-07-15 13:50:30.841951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.354 [2024-07-15 13:50:30.841954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.623 I/O targets: 00:20:04.623 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:04.623 00:20:04.623 00:20:04.623 CUnit - A unit testing framework for C - Version 2.1-3 00:20:04.623 http://cunit.sourceforge.net/ 00:20:04.623 00:20:04.623 00:20:04.623 Suite: bdevio tests on: Nvme1n1 00:20:04.883 Test: blockdev write read block ...passed 00:20:04.883 Test: blockdev write zeroes read block ...passed 00:20:04.883 Test: blockdev write zeroes read no split ...passed 00:20:04.883 Test: blockdev write zeroes read split ...passed 00:20:04.883 Test: blockdev write zeroes read split partial ...passed 00:20:04.883 Test: blockdev reset ...[2024-07-15 13:50:31.254173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:04.883 [2024-07-15 13:50:31.254243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x573c10 (9): Bad file descriptor 00:20:04.883 [2024-07-15 13:50:31.265940] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:04.883 passed 00:20:04.883 Test: blockdev write read 8 blocks ...passed 00:20:04.883 Test: blockdev write read size > 128k ...passed 00:20:04.883 Test: blockdev write read invalid size ...passed 00:20:04.883 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:04.883 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:04.883 Test: blockdev write read max offset ...passed 00:20:04.883 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:05.144 Test: blockdev writev readv 8 blocks ...passed 00:20:05.144 Test: blockdev writev readv 30 x 1block ...passed 00:20:05.144 Test: blockdev writev readv block ...passed 00:20:05.144 Test: blockdev writev readv size > 128k ...passed 00:20:05.144 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:05.144 Test: blockdev comparev and writev ...[2024-07-15 13:50:31.535239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.144 [2024-07-15 13:50:31.535264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.144 [2024-07-15 13:50:31.535275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.144 [2024-07-15 13:50:31.535281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:05.144 [2024-07-15 13:50:31.535840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.144 [2024-07-15 13:50:31.535850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:05.144 [2024-07-15 13:50:31.535859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.144 [2024-07-15 13:50:31.535865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:05.144 [2024-07-15 13:50:31.536421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.144 [2024-07-15 13:50:31.536430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:05.144 [2024-07-15 13:50:31.536439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.144 [2024-07-15 13:50:31.536444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:05.144 [2024-07-15 13:50:31.537000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.144 [2024-07-15 13:50:31.537008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:05.144 [2024-07-15 13:50:31.537017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:05.144 [2024-07-15 13:50:31.537022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:05.144 passed 00:20:05.144 Test: blockdev nvme passthru rw ...passed 00:20:05.144 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:50:31.622201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.144 [2024-07-15 13:50:31.622211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:05.144 [2024-07-15 13:50:31.622623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.144 [2024-07-15 13:50:31.622630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:05.144 [2024-07-15 13:50:31.623043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.144 [2024-07-15 13:50:31.623051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:05.144 [2024-07-15 13:50:31.623476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.144 [2024-07-15 13:50:31.623484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:05.144 passed 00:20:05.144 Test: blockdev nvme admin passthru ...passed 00:20:05.404 Test: blockdev copy ...passed 00:20:05.404 00:20:05.404 Run Summary: Type Total Ran Passed Failed Inactive 00:20:05.404 suites 1 1 n/a 0 0 00:20:05.404 tests 23 23 23 0 0 00:20:05.404 asserts 152 152 152 0 n/a 00:20:05.404 00:20:05.404 Elapsed time = 1.151 seconds 00:20:05.665 13:50:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:05.665 13:50:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.665 13:50:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.665 13:50:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.665 13:50:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:05.665 13:50:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:05.665 13:50:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:05.665 13:50:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:05.665 13:50:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:05.665 13:50:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:05.665 13:50:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.665 13:50:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:05.665 rmmod nvme_tcp 00:20:05.665 rmmod nvme_fabrics 00:20:05.665 rmmod nvme_keyring 00:20:05.665 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.665 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:05.665 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:05.665 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1113826 ']' 00:20:05.665 13:50:32 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1113826 00:20:05.665 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1113826 ']' 00:20:05.665 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1113826 00:20:05.666 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:05.666 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.666 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1113826 00:20:05.666 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:05.666 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:05.666 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1113826' 00:20:05.666 killing process with pid 1113826 00:20:05.666 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1113826 00:20:05.666 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1113826 00:20:06.237 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:06.237 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:06.237 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:06.237 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.237 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:06.237 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.237 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.237 13:50:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.148 13:50:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:08.148 00:20:08.148 real 0m12.244s 00:20:08.148 user 0m14.172s 00:20:08.148 sys 0m6.351s 00:20:08.148 13:50:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:08.148 13:50:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:08.148 ************************************ 00:20:08.148 END TEST nvmf_bdevio_no_huge 00:20:08.148 ************************************ 00:20:08.148 13:50:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:08.148 13:50:34 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:08.149 13:50:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:08.149 13:50:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:08.149 13:50:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:08.149 ************************************ 00:20:08.149 START TEST nvmf_tls 00:20:08.149 ************************************ 00:20:08.149 13:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:08.428 * Looking for test storage... 
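tls.sh is starting here (run_test nvmf_tls ... --transport=tcp); immediately before it, the trace above ran the shared teardown for the bdevio case through the nvmftestfini trap. Condensed to the steps visible in the trace (the pid and interface name are the ones from this run; _remove_spdk_ns only appears as an eval'd helper, so the comment on it is an assumption about what it does):

# Teardown performed by nvmftestfini in the trace above, in order:
sync
modprobe -v -r nvme-tcp          # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
modprobe -v -r nvme-fabrics
kill 1113826                     # nvmf_tgt started for nvmf_bdevio_no_huge
wait 1113826
_remove_spdk_ns                  # helper from nvmf/common.sh; assumed to tear down cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1         # flush the initiator-side interface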
00:20:08.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.428 13:50:34 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:08.429 13:50:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:16.567 
13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:16.567 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:16.567 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:16.567 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:16.567 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:16.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:20:16.567 00:20:16.567 --- 10.0.0.2 ping statistics --- 00:20:16.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.567 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:16.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:16.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:20:16.567 00:20:16.567 --- 10.0.0.1 ping statistics --- 00:20:16.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.567 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:16.567 13:50:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:16.567 13:50:42 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:16.567 13:50:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:16.567 13:50:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:16.567 13:50:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1118517 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1118517 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1118517 ']' 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.568 [2024-07-15 13:50:42.088892] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:16.568 [2024-07-15 13:50:42.088956] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.568 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.568 [2024-07-15 13:50:42.179626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.568 [2024-07-15 13:50:42.273035] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.568 [2024-07-15 13:50:42.273096] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:16.568 [2024-07-15 13:50:42.273104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.568 [2024-07-15 13:50:42.273111] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.568 [2024-07-15 13:50:42.273118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:16.568 [2024-07-15 13:50:42.273166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:16.568 13:50:42 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:16.568 true 00:20:16.568 13:50:43 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:16.568 13:50:43 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:16.829 13:50:43 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:16.829 13:50:43 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:16.829 13:50:43 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:17.090 13:50:43 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:17.090 13:50:43 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:17.090 13:50:43 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:17.090 13:50:43 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:17.090 13:50:43 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:17.351 13:50:43 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:17.351 13:50:43 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:17.612 13:50:43 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:17.612 13:50:43 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:17.612 13:50:43 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:17.612 13:50:43 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:17.612 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:17.612 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:17.612 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:17.872 13:50:44 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:17.872 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:18.133 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:18.133 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:18.133 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:18.133 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:18.133 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.lRKcInaeAw 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.4VatYu3R69 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.lRKcInaeAw 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.4VatYu3R69 00:20:18.395 13:50:44 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:18.655 13:50:45 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:18.915 13:50:45 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.lRKcInaeAw 00:20:18.916 13:50:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lRKcInaeAw 00:20:18.916 13:50:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:18.916 [2024-07-15 13:50:45.376914] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.916 13:50:45 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:19.176 13:50:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:19.176 [2024-07-15 13:50:45.685662] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:19.176 [2024-07-15 13:50:45.685864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.437 13:50:45 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:19.437 malloc0 00:20:19.437 13:50:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:19.698 13:50:46 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lRKcInaeAw 00:20:19.698 [2024-07-15 13:50:46.164782] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:19.698 13:50:46 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.lRKcInaeAw 00:20:19.698 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.949 Initializing NVMe Controllers 00:20:31.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:31.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:31.949 Initialization complete. Launching workers. 
00:20:31.949 ======================================================== 00:20:31.949 Latency(us) 00:20:31.949 Device Information : IOPS MiB/s Average min max 00:20:31.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19094.48 74.59 3351.80 1175.35 6285.91 00:20:31.949 ======================================================== 00:20:31.949 Total : 19094.48 74.59 3351.80 1175.35 6285.91 00:20:31.949 00:20:31.949 13:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lRKcInaeAw 00:20:31.949 13:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:31.949 13:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:31.949 13:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:31.949 13:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lRKcInaeAw' 00:20:31.949 13:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:31.949 13:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1121348 00:20:31.949 13:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:31.949 13:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1121348 /var/tmp/bdevperf.sock 00:20:31.949 13:50:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:31.949 13:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1121348 ']' 00:20:31.949 13:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.949 13:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:31.949 13:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:31.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:31.949 13:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:31.949 13:50:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.949 [2024-07-15 13:50:56.319006] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
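The bandwidth numbers above come from spdk_nvme_perf connecting with -S ssl and --psk-path against a target that setup_nvmf_tgt prepared a few lines earlier. Pulled together from the trace (rpc_py is the same scripts/rpc.py path used throughout, talking to the nvmf_tgt's default RPC socket; the key file is the one created with mktemp and chmod 0600 above), the target-side sequence was:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TLS 1.3 on the ssl sock implementation, then finish app init
$rpc_py sock_impl_set_options -i ssl --tls-version 13
$rpc_py framework_start_init

# Transport, subsystem, TLS-required listener, namespace, and the PSK for host1
$rpc_py nvmf_create_transport -t tcp -o
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc_py bdev_malloc_create 32 4096 -b malloc0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.lRKcInaeAw

Only host1 is registered, and only with the key in /tmp/tmp.lRKcInaeAw; the failing attach attempts further down present either a different key or a different hostnqn.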
00:20:31.949 [2024-07-15 13:50:56.319061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121348 ] 00:20:31.949 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.949 [2024-07-15 13:50:56.367533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.949 [2024-07-15 13:50:56.419639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.949 13:50:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.949 13:50:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:31.949 13:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lRKcInaeAw 00:20:31.949 [2024-07-15 13:50:57.236423] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.949 [2024-07-15 13:50:57.236479] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:31.949 TLSTESTn1 00:20:31.949 13:50:57 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:31.949 Running I/O for 10 seconds... 00:20:41.948 00:20:41.948 Latency(us) 00:20:41.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.948 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:41.948 Verification LBA range: start 0x0 length 0x2000 00:20:41.948 TLSTESTn1 : 10.06 2816.12 11.00 0.00 0.00 45324.74 4751.36 136314.88 00:20:41.948 =================================================================================================================== 00:20:41.948 Total : 2816.12 11.00 0.00 0.00 45324.74 4751.36 136314.88 00:20:41.948 0 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1121348 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1121348 ']' 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1121348 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1121348 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1121348' 00:20:41.948 killing process with pid 1121348 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1121348 00:20:41.948 Received shutdown signal, test time was about 10.000000 seconds 00:20:41.948 00:20:41.948 Latency(us) 00:20:41.948 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:20:41.948 =================================================================================================================== 00:20:41.948 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:41.948 [2024-07-15 13:51:07.581873] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1121348 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4VatYu3R69 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4VatYu3R69 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4VatYu3R69 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.4VatYu3R69' 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1123548 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1123548 /var/tmp/bdevperf.sock 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1123548 ']' 00:20:41.948 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.949 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.949 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.949 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.949 13:51:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.949 [2024-07-15 13:51:07.756748] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:41.949 [2024-07-15 13:51:07.756804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123548 ] 00:20:41.949 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.949 [2024-07-15 13:51:07.805333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.949 [2024-07-15 13:51:07.857587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.208 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:42.208 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:42.208 13:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4VatYu3R69 00:20:42.208 [2024-07-15 13:51:08.650151] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:42.208 [2024-07-15 13:51:08.650207] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:42.208 [2024-07-15 13:51:08.658649] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:42.208 [2024-07-15 13:51:08.659361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f14ec0 (107): Transport endpoint is not connected 00:20:42.208 [2024-07-15 13:51:08.660357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f14ec0 (9): Bad file descriptor 00:20:42.208 [2024-07-15 13:51:08.661359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:42.208 [2024-07-15 13:51:08.661367] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:42.208 [2024-07-15 13:51:08.661374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:42.208 request: 00:20:42.208 { 00:20:42.208 "name": "TLSTEST", 00:20:42.208 "trtype": "tcp", 00:20:42.208 "traddr": "10.0.0.2", 00:20:42.208 "adrfam": "ipv4", 00:20:42.208 "trsvcid": "4420", 00:20:42.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.208 "prchk_reftag": false, 00:20:42.208 "prchk_guard": false, 00:20:42.208 "hdgst": false, 00:20:42.208 "ddgst": false, 00:20:42.208 "psk": "/tmp/tmp.4VatYu3R69", 00:20:42.208 "method": "bdev_nvme_attach_controller", 00:20:42.208 "req_id": 1 00:20:42.208 } 00:20:42.208 Got JSON-RPC error response 00:20:42.208 response: 00:20:42.208 { 00:20:42.208 "code": -5, 00:20:42.208 "message": "Input/output error" 00:20:42.208 } 00:20:42.209 13:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1123548 00:20:42.209 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1123548 ']' 00:20:42.209 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1123548 00:20:42.209 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:42.209 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.209 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1123548 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1123548' 00:20:42.469 killing process with pid 1123548 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1123548 00:20:42.469 Received shutdown signal, test time was about 10.000000 seconds 00:20:42.469 00:20:42.469 Latency(us) 00:20:42.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.469 =================================================================================================================== 00:20:42.469 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:42.469 [2024-07-15 13:51:08.748054] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1123548 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.lRKcInaeAw 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.lRKcInaeAw 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.lRKcInaeAw 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lRKcInaeAw' 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1123817 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1123817 /var/tmp/bdevperf.sock 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1123817 ']' 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:42.469 13:51:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.469 [2024-07-15 13:51:08.904561] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
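# --- annotation (not console output): the "NOT run_bdevperf ..." wrappers driving these
# negative tests come from autotest_common.sh; they run the wrapped command and succeed
# only if it fails, which is why the expected outcome here is a JSON-RPC I/O error and a
# "return 1". A stripped-down sketch of the idea (the real helper also validates the
# argument and special-cases exit statuses above 128, as the es=.../type -t lines show):
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # succeed only when the wrapped command failed
}
# usage sketch: NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.lRKcInaeAw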
00:20:42.469 [2024-07-15 13:51:08.904614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123817 ] 00:20:42.469 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.469 [2024-07-15 13:51:08.954201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.729 [2024-07-15 13:51:09.005531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.299 13:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.299 13:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:43.299 13:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.lRKcInaeAw 00:20:43.299 [2024-07-15 13:51:09.818273] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.299 [2024-07-15 13:51:09.818335] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:43.299 [2024-07-15 13:51:09.822727] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:43.299 [2024-07-15 13:51:09.822748] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:43.299 [2024-07-15 13:51:09.822769] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:43.299 [2024-07-15 13:51:09.823454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1912ec0 (107): Transport endpoint is not connected 00:20:43.299 [2024-07-15 13:51:09.824448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1912ec0 (9): Bad file descriptor 00:20:43.560 [2024-07-15 13:51:09.825453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:43.560 [2024-07-15 13:51:09.825462] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:43.560 [2024-07-15 13:51:09.825470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
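# --- annotation (not console output): the tcp_sock_get_key/posix_sock errors above show how
# the target resolves TLS keys: the PSK identity couples the host NQN and the subsystem NQN
# ("NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"), so a key registered for
# host1 cannot satisfy an attach made as host2. This negative test deliberately leaves that
# registration out; as a sketch, the target-side call this attach would additionally need is
# the same add_host form used later in this log for host1:
#   scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
#       nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.lRKcInaeAw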
00:20:43.560 request: 00:20:43.560 { 00:20:43.560 "name": "TLSTEST", 00:20:43.560 "trtype": "tcp", 00:20:43.560 "traddr": "10.0.0.2", 00:20:43.560 "adrfam": "ipv4", 00:20:43.560 "trsvcid": "4420", 00:20:43.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.560 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:43.560 "prchk_reftag": false, 00:20:43.560 "prchk_guard": false, 00:20:43.560 "hdgst": false, 00:20:43.560 "ddgst": false, 00:20:43.560 "psk": "/tmp/tmp.lRKcInaeAw", 00:20:43.560 "method": "bdev_nvme_attach_controller", 00:20:43.560 "req_id": 1 00:20:43.560 } 00:20:43.560 Got JSON-RPC error response 00:20:43.560 response: 00:20:43.560 { 00:20:43.560 "code": -5, 00:20:43.560 "message": "Input/output error" 00:20:43.560 } 00:20:43.560 13:51:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1123817 00:20:43.560 13:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1123817 ']' 00:20:43.560 13:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1123817 00:20:43.560 13:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:43.560 13:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.560 13:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1123817 00:20:43.560 13:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:43.560 13:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:43.560 13:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1123817' 00:20:43.560 killing process with pid 1123817 00:20:43.560 13:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1123817 00:20:43.560 Received shutdown signal, test time was about 10.000000 seconds 00:20:43.560 00:20:43.560 Latency(us) 00:20:43.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.560 =================================================================================================================== 00:20:43.560 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:43.560 [2024-07-15 13:51:09.909058] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:43.560 13:51:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1123817 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.lRKcInaeAw 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.lRKcInaeAw 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.lRKcInaeAw 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lRKcInaeAw' 00:20:43.560 13:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.561 13:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1124391 00:20:43.561 13:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.561 13:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1124391 /var/tmp/bdevperf.sock 00:20:43.561 13:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.561 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1124391 ']' 00:20:43.561 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.561 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.561 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.561 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.561 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.561 [2024-07-15 13:51:10.067791] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:43.561 [2024-07-15 13:51:10.067846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124391 ] 00:20:43.821 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.821 [2024-07-15 13:51:10.119693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.821 [2024-07-15 13:51:10.173069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.392 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.392 13:51:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:44.392 13:51:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lRKcInaeAw 00:20:44.653 [2024-07-15 13:51:10.985758] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.653 [2024-07-15 13:51:10.985825] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:44.653 [2024-07-15 13:51:10.994273] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:44.653 [2024-07-15 13:51:10.994295] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:44.653 [2024-07-15 13:51:10.994316] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:44.653 [2024-07-15 13:51:10.994858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1562ec0 (107): Transport endpoint is not connected 00:20:44.653 [2024-07-15 13:51:10.995854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1562ec0 (9): Bad file descriptor 00:20:44.653 [2024-07-15 13:51:10.996856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:44.653 [2024-07-15 13:51:10.996865] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:44.653 [2024-07-15 13:51:10.996872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:44.653 request: 00:20:44.653 { 00:20:44.653 "name": "TLSTEST", 00:20:44.653 "trtype": "tcp", 00:20:44.653 "traddr": "10.0.0.2", 00:20:44.653 "adrfam": "ipv4", 00:20:44.653 "trsvcid": "4420", 00:20:44.653 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:44.653 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.653 "prchk_reftag": false, 00:20:44.653 "prchk_guard": false, 00:20:44.653 "hdgst": false, 00:20:44.653 "ddgst": false, 00:20:44.653 "psk": "/tmp/tmp.lRKcInaeAw", 00:20:44.653 "method": "bdev_nvme_attach_controller", 00:20:44.653 "req_id": 1 00:20:44.653 } 00:20:44.653 Got JSON-RPC error response 00:20:44.653 response: 00:20:44.653 { 00:20:44.653 "code": -5, 00:20:44.653 "message": "Input/output error" 00:20:44.654 } 00:20:44.654 13:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1124391 00:20:44.654 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1124391 ']' 00:20:44.654 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1124391 00:20:44.654 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:44.654 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:44.654 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1124391 00:20:44.654 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:44.654 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:44.654 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1124391' 00:20:44.654 killing process with pid 1124391 00:20:44.654 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1124391 00:20:44.654 Received shutdown signal, test time was about 10.000000 seconds 00:20:44.654 00:20:44.654 Latency(us) 00:20:44.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.654 =================================================================================================================== 00:20:44.654 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:44.654 [2024-07-15 13:51:11.082772] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:44.654 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1124391 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1124694 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1124694 /var/tmp/bdevperf.sock 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1124694 ']' 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.915 13:51:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.915 [2024-07-15 13:51:11.241564] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:44.915 [2024-07-15 13:51:11.241620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124694 ] 00:20:44.915 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.915 [2024-07-15 13:51:11.291982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.915 [2024-07-15 13:51:11.344166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.856 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.856 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:45.857 [2024-07-15 13:51:12.157099] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:45.857 [2024-07-15 13:51:12.158646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3f4a0 (9): Bad file descriptor 00:20:45.857 [2024-07-15 13:51:12.159644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:45.857 [2024-07-15 13:51:12.159653] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:45.857 [2024-07-15 13:51:12.159660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:45.857 request: 00:20:45.857 { 00:20:45.857 "name": "TLSTEST", 00:20:45.857 "trtype": "tcp", 00:20:45.857 "traddr": "10.0.0.2", 00:20:45.857 "adrfam": "ipv4", 00:20:45.857 "trsvcid": "4420", 00:20:45.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.857 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.857 "prchk_reftag": false, 00:20:45.857 "prchk_guard": false, 00:20:45.857 "hdgst": false, 00:20:45.857 "ddgst": false, 00:20:45.857 "method": "bdev_nvme_attach_controller", 00:20:45.857 "req_id": 1 00:20:45.857 } 00:20:45.857 Got JSON-RPC error response 00:20:45.857 response: 00:20:45.857 { 00:20:45.857 "code": -5, 00:20:45.857 "message": "Input/output error" 00:20:45.857 } 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1124694 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1124694 ']' 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1124694 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1124694 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1124694' 00:20:45.857 killing process with pid 1124694 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1124694 00:20:45.857 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.857 00:20:45.857 Latency(us) 00:20:45.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.857 =================================================================================================================== 00:20:45.857 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1124694 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1118517 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1118517 ']' 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1118517 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:45.857 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1118517 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1118517' 00:20:46.162 
killing process with pid 1118517 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1118517 00:20:46.162 [2024-07-15 13:51:12.404151] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1118517 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.9e5d9DS3Pe 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.9e5d9DS3Pe 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:46.162 13:51:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:46.163 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:46.163 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.163 13:51:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1124987 00:20:46.163 13:51:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:46.163 13:51:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1124987 00:20:46.163 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1124987 ']' 00:20:46.163 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.163 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.163 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.163 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.163 13:51:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.163 [2024-07-15 13:51:12.628654] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
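# --- annotation (not console output): the format_interchange_psk/format_key step above wraps
# the configured key in the NVMe TLS PSK interchange form "NVMeTLSkey-1:<hash>:<base64>:"
# (hash 02 = SHA-384 here). A self-contained sketch of that transformation; treating the key
# string as literal ASCII bytes and appending a little-endian CRC32 are assumptions inferred
# from the output above, not a copy of the SPDK helper:
key=00112233445566778899aabbccddeeff0011223344556677 digest=2 python3 - << 'EOF'
import base64, os, zlib
key = os.environ["key"].encode()              # literal bytes of the configured key
digest = int(os.environ["digest"])            # interchange hash field: 2 -> SHA-384
crc = zlib.crc32(key).to_bytes(4, "little")   # appended integrity check (byte order assumed)
print("NVMeTLSkey-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()))
EOF
# The resulting string is what gets written to /tmp/tmp.9e5d9DS3Pe, which must stay chmod 0600.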
00:20:46.163 [2024-07-15 13:51:12.628730] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.478 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.478 [2024-07-15 13:51:12.712510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.478 [2024-07-15 13:51:12.767360] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.479 [2024-07-15 13:51:12.767395] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.479 [2024-07-15 13:51:12.767401] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.479 [2024-07-15 13:51:12.767406] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.479 [2024-07-15 13:51:12.767410] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.479 [2024-07-15 13:51:12.767431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.048 13:51:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.048 13:51:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:47.048 13:51:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:47.048 13:51:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:47.048 13:51:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.048 13:51:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.048 13:51:13 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.9e5d9DS3Pe 00:20:47.048 13:51:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9e5d9DS3Pe 00:20:47.048 13:51:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:47.048 [2024-07-15 13:51:13.573231] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.308 13:51:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:47.308 13:51:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:47.569 [2024-07-15 13:51:13.865935] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:47.569 [2024-07-15 13:51:13.866120] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.569 13:51:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:47.569 malloc0 00:20:47.569 13:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.9e5d9DS3Pe 00:20:47.829 [2024-07-15 13:51:14.288723] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9e5d9DS3Pe 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9e5d9DS3Pe' 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1125351 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1125351 /var/tmp/bdevperf.sock 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1125351 ']' 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.829 13:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.829 [2024-07-15 13:51:14.336293] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
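# --- annotation (not console output): condensed from the surrounding trace (target setup
# above, initiator attach and perform_tests just below), the passing TLS case is this RPC
# sequence, with $rootdir standing in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk:
rpc=$rootdir/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS-enabled listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9e5d9DS3Pe
# initiator side, against a bdevperf started as "bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10":
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9e5d9DS3Pe
$rootdir/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests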
00:20:47.829 [2024-07-15 13:51:14.336345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125351 ] 00:20:48.090 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.090 [2024-07-15 13:51:14.387009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.090 [2024-07-15 13:51:14.439202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.090 13:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.090 13:51:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:48.090 13:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9e5d9DS3Pe 00:20:48.351 [2024-07-15 13:51:14.658726] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.351 [2024-07-15 13:51:14.658785] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:48.351 TLSTESTn1 00:20:48.351 13:51:14 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:48.351 Running I/O for 10 seconds... 00:21:00.580 00:21:00.580 Latency(us) 00:21:00.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.580 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:00.580 Verification LBA range: start 0x0 length 0x2000 00:21:00.580 TLSTESTn1 : 10.06 2633.60 10.29 0.00 0.00 48464.39 4778.67 100051.63 00:21:00.580 =================================================================================================================== 00:21:00.580 Total : 2633.60 10.29 0.00 0.00 48464.39 4778.67 100051.63 00:21:00.580 0 00:21:00.580 13:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:00.580 13:51:24 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1125351 00:21:00.580 13:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1125351 ']' 00:21:00.580 13:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1125351 00:21:00.580 13:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:00.580 13:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:00.580 13:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1125351 00:21:00.580 13:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:00.580 13:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:00.580 13:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1125351' 00:21:00.580 killing process with pid 1125351 00:21:00.580 13:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1125351 00:21:00.580 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.580 00:21:00.580 Latency(us) 00:21:00.580 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:21:00.580 =================================================================================================================== 00:21:00.580 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:00.580 [2024-07-15 13:51:25.000586] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:00.580 13:51:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1125351 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.9e5d9DS3Pe 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9e5d9DS3Pe 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9e5d9DS3Pe 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9e5d9DS3Pe 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9e5d9DS3Pe' 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1127486 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1127486 /var/tmp/bdevperf.sock 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1127486 ']' 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.580 [2024-07-15 13:51:25.172138] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
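# --- annotation (not console output): the chmod 0666 above deliberately loosens the key
# file's mode; bdev_nvme refuses to load such a key ("Incorrect permissions for PSK file"
# just below), so this attach is expected to fail. A pre-flight check of the sort one might
# run before handing a key file to SPDK; treating any group/other access as unacceptable is
# an assumption, since the log only demonstrates that 0600 passes and 0666 fails (GNU stat):
mode=$(stat -c '%a' /tmp/tmp.9e5d9DS3Pe)
if (( 0$mode & 0077 )); then
    echo "PSK file mode $mode allows group/other access; expect SPDK to reject it" >&2
fi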
00:21:00.580 [2024-07-15 13:51:25.172196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127486 ] 00:21:00.580 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.580 [2024-07-15 13:51:25.222578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.580 [2024-07-15 13:51:25.276999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:00.580 13:51:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9e5d9DS3Pe 00:21:00.580 [2024-07-15 13:51:26.077906] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.580 [2024-07-15 13:51:26.077953] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:00.580 [2024-07-15 13:51:26.077958] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.9e5d9DS3Pe 00:21:00.580 request: 00:21:00.580 { 00:21:00.580 "name": "TLSTEST", 00:21:00.580 "trtype": "tcp", 00:21:00.580 "traddr": "10.0.0.2", 00:21:00.580 "adrfam": "ipv4", 00:21:00.580 "trsvcid": "4420", 00:21:00.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.580 "prchk_reftag": false, 00:21:00.580 "prchk_guard": false, 00:21:00.580 "hdgst": false, 00:21:00.580 "ddgst": false, 00:21:00.580 "psk": "/tmp/tmp.9e5d9DS3Pe", 00:21:00.580 "method": "bdev_nvme_attach_controller", 00:21:00.580 "req_id": 1 00:21:00.580 } 00:21:00.580 Got JSON-RPC error response 00:21:00.580 response: 00:21:00.580 { 00:21:00.580 "code": -1, 00:21:00.580 "message": "Operation not permitted" 00:21:00.580 } 00:21:00.580 13:51:26 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1127486 00:21:00.580 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1127486 ']' 00:21:00.580 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1127486 00:21:00.580 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:00.580 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:00.580 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1127486 00:21:00.580 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:00.580 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:00.580 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1127486' 00:21:00.580 killing process with pid 1127486 00:21:00.580 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1127486 00:21:00.580 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.581 00:21:00.581 Latency(us) 00:21:00.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.581 
=================================================================================================================== 00:21:00.581 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1127486 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1124987 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1124987 ']' 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1124987 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1124987 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1124987' 00:21:00.581 killing process with pid 1124987 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1124987 00:21:00.581 [2024-07-15 13:51:26.321767] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1124987 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1127708 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1127708 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1127708 ']' 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.581 13:51:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.581 [2024-07-15 13:51:26.499195] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:00.581 [2024-07-15 13:51:26.499249] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.581 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.581 [2024-07-15 13:51:26.580344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.581 [2024-07-15 13:51:26.636158] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.581 [2024-07-15 13:51:26.636194] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.581 [2024-07-15 13:51:26.636199] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.581 [2024-07-15 13:51:26.636204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.581 [2024-07-15 13:51:26.636208] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.581 [2024-07-15 13:51:26.636225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.843 13:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.843 13:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:00.843 13:51:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:00.843 13:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:00.843 13:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.843 13:51:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.843 13:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.9e5d9DS3Pe 00:21:00.843 13:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:00.843 13:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.9e5d9DS3Pe 00:21:00.843 13:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:00.843 13:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:00.843 13:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:00.843 13:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:00.843 13:51:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.9e5d9DS3Pe 00:21:00.843 13:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9e5d9DS3Pe 00:21:00.843 13:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:01.103 [2024-07-15 13:51:27.434840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.103 13:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:01.103 
13:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:01.363 [2024-07-15 13:51:27.727553] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:01.363 [2024-07-15 13:51:27.727754] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.363 13:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:01.363 malloc0 00:21:01.363 13:51:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:01.623 13:51:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9e5d9DS3Pe 00:21:01.884 [2024-07-15 13:51:28.162345] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:01.884 [2024-07-15 13:51:28.162362] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:01.884 [2024-07-15 13:51:28.162381] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:01.884 request: 00:21:01.884 { 00:21:01.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.884 "host": "nqn.2016-06.io.spdk:host1", 00:21:01.884 "psk": "/tmp/tmp.9e5d9DS3Pe", 00:21:01.884 "method": "nvmf_subsystem_add_host", 00:21:01.884 "req_id": 1 00:21:01.884 } 00:21:01.884 Got JSON-RPC error response 00:21:01.884 response: 00:21:01.884 { 00:21:01.884 "code": -32603, 00:21:01.884 "message": "Internal error" 00:21:01.884 } 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1127708 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1127708 ']' 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1127708 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1127708 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1127708' 00:21:01.884 killing process with pid 1127708 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1127708 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1127708 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.9e5d9DS3Pe 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:01.884 
13:51:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1128106 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1128106 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1128106 ']' 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.884 13:51:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.169 [2024-07-15 13:51:28.412378] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:02.169 [2024-07-15 13:51:28.412436] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.169 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.169 [2024-07-15 13:51:28.495105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.169 [2024-07-15 13:51:28.553497] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.169 [2024-07-15 13:51:28.553531] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.169 [2024-07-15 13:51:28.553536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.169 [2024-07-15 13:51:28.553541] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.169 [2024-07-15 13:51:28.553546] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:02.169 [2024-07-15 13:51:28.553567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.742 13:51:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.742 13:51:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:02.742 13:51:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.742 13:51:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:02.742 13:51:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.742 13:51:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.742 13:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.9e5d9DS3Pe 00:21:02.743 13:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9e5d9DS3Pe 00:21:02.743 13:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:03.003 [2024-07-15 13:51:29.352936] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.003 13:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:03.004 13:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:03.264 [2024-07-15 13:51:29.661677] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:03.264 [2024-07-15 13:51:29.661879] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.264 13:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:03.524 malloc0 00:21:03.524 13:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:03.524 13:51:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9e5d9DS3Pe 00:21:03.785 [2024-07-15 13:51:30.088549] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:03.785 13:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:03.785 13:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1128479 00:21:03.785 13:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:03.785 13:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1128479 /var/tmp/bdevperf.sock 00:21:03.785 13:51:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1128479 ']' 00:21:03.785 13:51:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.785 13:51:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.785 13:51:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.785 13:51:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.785 13:51:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.785 [2024-07-15 13:51:30.135618] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:03.785 [2024-07-15 13:51:30.135668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128479 ] 00:21:03.785 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.785 [2024-07-15 13:51:30.185446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.785 [2024-07-15 13:51:30.237482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.047 13:51:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.047 13:51:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:04.047 13:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9e5d9DS3Pe 00:21:04.047 [2024-07-15 13:51:30.457548] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:04.047 [2024-07-15 13:51:30.457612] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:04.047 TLSTESTn1 00:21:04.047 13:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:04.308 13:51:30 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:04.308 "subsystems": [ 00:21:04.308 { 00:21:04.308 "subsystem": "keyring", 00:21:04.308 "config": [] 00:21:04.308 }, 00:21:04.308 { 00:21:04.308 "subsystem": "iobuf", 00:21:04.308 "config": [ 00:21:04.308 { 00:21:04.308 "method": "iobuf_set_options", 00:21:04.308 "params": { 00:21:04.308 "small_pool_count": 8192, 00:21:04.308 "large_pool_count": 1024, 00:21:04.308 "small_bufsize": 8192, 00:21:04.308 "large_bufsize": 135168 00:21:04.308 } 00:21:04.308 } 00:21:04.308 ] 00:21:04.308 }, 00:21:04.308 { 00:21:04.308 "subsystem": "sock", 00:21:04.308 "config": [ 00:21:04.308 { 00:21:04.308 "method": "sock_set_default_impl", 00:21:04.308 "params": { 00:21:04.308 "impl_name": "posix" 00:21:04.308 } 00:21:04.308 }, 00:21:04.308 { 00:21:04.308 "method": "sock_impl_set_options", 00:21:04.308 "params": { 00:21:04.308 "impl_name": "ssl", 00:21:04.308 "recv_buf_size": 4096, 00:21:04.308 "send_buf_size": 4096, 00:21:04.308 "enable_recv_pipe": true, 00:21:04.308 "enable_quickack": false, 00:21:04.308 "enable_placement_id": 0, 00:21:04.308 "enable_zerocopy_send_server": true, 00:21:04.308 "enable_zerocopy_send_client": false, 00:21:04.308 "zerocopy_threshold": 0, 00:21:04.308 "tls_version": 0, 00:21:04.308 "enable_ktls": false 00:21:04.308 } 00:21:04.308 }, 00:21:04.308 { 00:21:04.308 "method": "sock_impl_set_options", 00:21:04.308 "params": { 00:21:04.308 "impl_name": "posix", 00:21:04.308 "recv_buf_size": 2097152, 00:21:04.308 
"send_buf_size": 2097152, 00:21:04.308 "enable_recv_pipe": true, 00:21:04.308 "enable_quickack": false, 00:21:04.308 "enable_placement_id": 0, 00:21:04.308 "enable_zerocopy_send_server": true, 00:21:04.308 "enable_zerocopy_send_client": false, 00:21:04.308 "zerocopy_threshold": 0, 00:21:04.308 "tls_version": 0, 00:21:04.308 "enable_ktls": false 00:21:04.308 } 00:21:04.308 } 00:21:04.309 ] 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "subsystem": "vmd", 00:21:04.309 "config": [] 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "subsystem": "accel", 00:21:04.309 "config": [ 00:21:04.309 { 00:21:04.309 "method": "accel_set_options", 00:21:04.309 "params": { 00:21:04.309 "small_cache_size": 128, 00:21:04.309 "large_cache_size": 16, 00:21:04.309 "task_count": 2048, 00:21:04.309 "sequence_count": 2048, 00:21:04.309 "buf_count": 2048 00:21:04.309 } 00:21:04.309 } 00:21:04.309 ] 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "subsystem": "bdev", 00:21:04.309 "config": [ 00:21:04.309 { 00:21:04.309 "method": "bdev_set_options", 00:21:04.309 "params": { 00:21:04.309 "bdev_io_pool_size": 65535, 00:21:04.309 "bdev_io_cache_size": 256, 00:21:04.309 "bdev_auto_examine": true, 00:21:04.309 "iobuf_small_cache_size": 128, 00:21:04.309 "iobuf_large_cache_size": 16 00:21:04.309 } 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "method": "bdev_raid_set_options", 00:21:04.309 "params": { 00:21:04.309 "process_window_size_kb": 1024 00:21:04.309 } 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "method": "bdev_iscsi_set_options", 00:21:04.309 "params": { 00:21:04.309 "timeout_sec": 30 00:21:04.309 } 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "method": "bdev_nvme_set_options", 00:21:04.309 "params": { 00:21:04.309 "action_on_timeout": "none", 00:21:04.309 "timeout_us": 0, 00:21:04.309 "timeout_admin_us": 0, 00:21:04.309 "keep_alive_timeout_ms": 10000, 00:21:04.309 "arbitration_burst": 0, 00:21:04.309 "low_priority_weight": 0, 00:21:04.309 "medium_priority_weight": 0, 00:21:04.309 "high_priority_weight": 0, 00:21:04.309 "nvme_adminq_poll_period_us": 10000, 00:21:04.309 "nvme_ioq_poll_period_us": 0, 00:21:04.309 "io_queue_requests": 0, 00:21:04.309 "delay_cmd_submit": true, 00:21:04.309 "transport_retry_count": 4, 00:21:04.309 "bdev_retry_count": 3, 00:21:04.309 "transport_ack_timeout": 0, 00:21:04.309 "ctrlr_loss_timeout_sec": 0, 00:21:04.309 "reconnect_delay_sec": 0, 00:21:04.309 "fast_io_fail_timeout_sec": 0, 00:21:04.309 "disable_auto_failback": false, 00:21:04.309 "generate_uuids": false, 00:21:04.309 "transport_tos": 0, 00:21:04.309 "nvme_error_stat": false, 00:21:04.309 "rdma_srq_size": 0, 00:21:04.309 "io_path_stat": false, 00:21:04.309 "allow_accel_sequence": false, 00:21:04.309 "rdma_max_cq_size": 0, 00:21:04.309 "rdma_cm_event_timeout_ms": 0, 00:21:04.309 "dhchap_digests": [ 00:21:04.309 "sha256", 00:21:04.309 "sha384", 00:21:04.309 "sha512" 00:21:04.309 ], 00:21:04.309 "dhchap_dhgroups": [ 00:21:04.309 "null", 00:21:04.309 "ffdhe2048", 00:21:04.309 "ffdhe3072", 00:21:04.309 "ffdhe4096", 00:21:04.309 "ffdhe6144", 00:21:04.309 "ffdhe8192" 00:21:04.309 ] 00:21:04.309 } 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "method": "bdev_nvme_set_hotplug", 00:21:04.309 "params": { 00:21:04.309 "period_us": 100000, 00:21:04.309 "enable": false 00:21:04.309 } 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "method": "bdev_malloc_create", 00:21:04.309 "params": { 00:21:04.309 "name": "malloc0", 00:21:04.309 "num_blocks": 8192, 00:21:04.309 "block_size": 4096, 00:21:04.309 "physical_block_size": 4096, 00:21:04.309 "uuid": 
"d9dc5c7b-db23-4796-b462-477668c73847", 00:21:04.309 "optimal_io_boundary": 0 00:21:04.309 } 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "method": "bdev_wait_for_examine" 00:21:04.309 } 00:21:04.309 ] 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "subsystem": "nbd", 00:21:04.309 "config": [] 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "subsystem": "scheduler", 00:21:04.309 "config": [ 00:21:04.309 { 00:21:04.309 "method": "framework_set_scheduler", 00:21:04.309 "params": { 00:21:04.309 "name": "static" 00:21:04.309 } 00:21:04.309 } 00:21:04.309 ] 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "subsystem": "nvmf", 00:21:04.309 "config": [ 00:21:04.309 { 00:21:04.309 "method": "nvmf_set_config", 00:21:04.309 "params": { 00:21:04.309 "discovery_filter": "match_any", 00:21:04.309 "admin_cmd_passthru": { 00:21:04.309 "identify_ctrlr": false 00:21:04.309 } 00:21:04.309 } 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "method": "nvmf_set_max_subsystems", 00:21:04.309 "params": { 00:21:04.309 "max_subsystems": 1024 00:21:04.309 } 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "method": "nvmf_set_crdt", 00:21:04.309 "params": { 00:21:04.309 "crdt1": 0, 00:21:04.309 "crdt2": 0, 00:21:04.309 "crdt3": 0 00:21:04.309 } 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "method": "nvmf_create_transport", 00:21:04.309 "params": { 00:21:04.309 "trtype": "TCP", 00:21:04.309 "max_queue_depth": 128, 00:21:04.309 "max_io_qpairs_per_ctrlr": 127, 00:21:04.309 "in_capsule_data_size": 4096, 00:21:04.309 "max_io_size": 131072, 00:21:04.309 "io_unit_size": 131072, 00:21:04.309 "max_aq_depth": 128, 00:21:04.309 "num_shared_buffers": 511, 00:21:04.309 "buf_cache_size": 4294967295, 00:21:04.309 "dif_insert_or_strip": false, 00:21:04.309 "zcopy": false, 00:21:04.309 "c2h_success": false, 00:21:04.309 "sock_priority": 0, 00:21:04.309 "abort_timeout_sec": 1, 00:21:04.309 "ack_timeout": 0, 00:21:04.309 "data_wr_pool_size": 0 00:21:04.309 } 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "method": "nvmf_create_subsystem", 00:21:04.309 "params": { 00:21:04.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.309 "allow_any_host": false, 00:21:04.309 "serial_number": "SPDK00000000000001", 00:21:04.309 "model_number": "SPDK bdev Controller", 00:21:04.309 "max_namespaces": 10, 00:21:04.309 "min_cntlid": 1, 00:21:04.309 "max_cntlid": 65519, 00:21:04.309 "ana_reporting": false 00:21:04.309 } 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "method": "nvmf_subsystem_add_host", 00:21:04.309 "params": { 00:21:04.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.309 "host": "nqn.2016-06.io.spdk:host1", 00:21:04.309 "psk": "/tmp/tmp.9e5d9DS3Pe" 00:21:04.309 } 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "method": "nvmf_subsystem_add_ns", 00:21:04.309 "params": { 00:21:04.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.309 "namespace": { 00:21:04.309 "nsid": 1, 00:21:04.309 "bdev_name": "malloc0", 00:21:04.309 "nguid": "D9DC5C7BDB234796B462477668C73847", 00:21:04.309 "uuid": "d9dc5c7b-db23-4796-b462-477668c73847", 00:21:04.309 "no_auto_visible": false 00:21:04.309 } 00:21:04.309 } 00:21:04.309 }, 00:21:04.309 { 00:21:04.309 "method": "nvmf_subsystem_add_listener", 00:21:04.309 "params": { 00:21:04.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.309 "listen_address": { 00:21:04.309 "trtype": "TCP", 00:21:04.309 "adrfam": "IPv4", 00:21:04.309 "traddr": "10.0.0.2", 00:21:04.309 "trsvcid": "4420" 00:21:04.309 }, 00:21:04.309 "secure_channel": true 00:21:04.309 } 00:21:04.309 } 00:21:04.309 ] 00:21:04.309 } 00:21:04.309 ] 00:21:04.309 }' 00:21:04.309 13:51:30 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:04.570 13:51:31 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:04.570 "subsystems": [ 00:21:04.570 { 00:21:04.570 "subsystem": "keyring", 00:21:04.570 "config": [] 00:21:04.570 }, 00:21:04.570 { 00:21:04.570 "subsystem": "iobuf", 00:21:04.570 "config": [ 00:21:04.570 { 00:21:04.570 "method": "iobuf_set_options", 00:21:04.570 "params": { 00:21:04.570 "small_pool_count": 8192, 00:21:04.570 "large_pool_count": 1024, 00:21:04.570 "small_bufsize": 8192, 00:21:04.570 "large_bufsize": 135168 00:21:04.570 } 00:21:04.570 } 00:21:04.570 ] 00:21:04.570 }, 00:21:04.570 { 00:21:04.570 "subsystem": "sock", 00:21:04.570 "config": [ 00:21:04.570 { 00:21:04.570 "method": "sock_set_default_impl", 00:21:04.570 "params": { 00:21:04.570 "impl_name": "posix" 00:21:04.570 } 00:21:04.570 }, 00:21:04.570 { 00:21:04.570 "method": "sock_impl_set_options", 00:21:04.570 "params": { 00:21:04.570 "impl_name": "ssl", 00:21:04.570 "recv_buf_size": 4096, 00:21:04.570 "send_buf_size": 4096, 00:21:04.570 "enable_recv_pipe": true, 00:21:04.570 "enable_quickack": false, 00:21:04.570 "enable_placement_id": 0, 00:21:04.570 "enable_zerocopy_send_server": true, 00:21:04.570 "enable_zerocopy_send_client": false, 00:21:04.570 "zerocopy_threshold": 0, 00:21:04.570 "tls_version": 0, 00:21:04.570 "enable_ktls": false 00:21:04.570 } 00:21:04.570 }, 00:21:04.570 { 00:21:04.570 "method": "sock_impl_set_options", 00:21:04.570 "params": { 00:21:04.570 "impl_name": "posix", 00:21:04.570 "recv_buf_size": 2097152, 00:21:04.570 "send_buf_size": 2097152, 00:21:04.570 "enable_recv_pipe": true, 00:21:04.570 "enable_quickack": false, 00:21:04.570 "enable_placement_id": 0, 00:21:04.570 "enable_zerocopy_send_server": true, 00:21:04.570 "enable_zerocopy_send_client": false, 00:21:04.570 "zerocopy_threshold": 0, 00:21:04.570 "tls_version": 0, 00:21:04.570 "enable_ktls": false 00:21:04.570 } 00:21:04.570 } 00:21:04.570 ] 00:21:04.570 }, 00:21:04.570 { 00:21:04.570 "subsystem": "vmd", 00:21:04.570 "config": [] 00:21:04.570 }, 00:21:04.570 { 00:21:04.570 "subsystem": "accel", 00:21:04.570 "config": [ 00:21:04.570 { 00:21:04.570 "method": "accel_set_options", 00:21:04.570 "params": { 00:21:04.570 "small_cache_size": 128, 00:21:04.570 "large_cache_size": 16, 00:21:04.570 "task_count": 2048, 00:21:04.570 "sequence_count": 2048, 00:21:04.570 "buf_count": 2048 00:21:04.570 } 00:21:04.570 } 00:21:04.570 ] 00:21:04.570 }, 00:21:04.570 { 00:21:04.570 "subsystem": "bdev", 00:21:04.570 "config": [ 00:21:04.570 { 00:21:04.570 "method": "bdev_set_options", 00:21:04.570 "params": { 00:21:04.570 "bdev_io_pool_size": 65535, 00:21:04.570 "bdev_io_cache_size": 256, 00:21:04.570 "bdev_auto_examine": true, 00:21:04.570 "iobuf_small_cache_size": 128, 00:21:04.570 "iobuf_large_cache_size": 16 00:21:04.570 } 00:21:04.570 }, 00:21:04.570 { 00:21:04.570 "method": "bdev_raid_set_options", 00:21:04.570 "params": { 00:21:04.570 "process_window_size_kb": 1024 00:21:04.570 } 00:21:04.570 }, 00:21:04.570 { 00:21:04.570 "method": "bdev_iscsi_set_options", 00:21:04.570 "params": { 00:21:04.570 "timeout_sec": 30 00:21:04.570 } 00:21:04.570 }, 00:21:04.570 { 00:21:04.570 "method": "bdev_nvme_set_options", 00:21:04.570 "params": { 00:21:04.570 "action_on_timeout": "none", 00:21:04.570 "timeout_us": 0, 00:21:04.570 "timeout_admin_us": 0, 00:21:04.570 "keep_alive_timeout_ms": 10000, 00:21:04.570 "arbitration_burst": 0, 
00:21:04.570 "low_priority_weight": 0, 00:21:04.570 "medium_priority_weight": 0, 00:21:04.570 "high_priority_weight": 0, 00:21:04.570 "nvme_adminq_poll_period_us": 10000, 00:21:04.570 "nvme_ioq_poll_period_us": 0, 00:21:04.570 "io_queue_requests": 512, 00:21:04.570 "delay_cmd_submit": true, 00:21:04.570 "transport_retry_count": 4, 00:21:04.570 "bdev_retry_count": 3, 00:21:04.570 "transport_ack_timeout": 0, 00:21:04.570 "ctrlr_loss_timeout_sec": 0, 00:21:04.570 "reconnect_delay_sec": 0, 00:21:04.570 "fast_io_fail_timeout_sec": 0, 00:21:04.570 "disable_auto_failback": false, 00:21:04.570 "generate_uuids": false, 00:21:04.570 "transport_tos": 0, 00:21:04.570 "nvme_error_stat": false, 00:21:04.570 "rdma_srq_size": 0, 00:21:04.570 "io_path_stat": false, 00:21:04.570 "allow_accel_sequence": false, 00:21:04.570 "rdma_max_cq_size": 0, 00:21:04.570 "rdma_cm_event_timeout_ms": 0, 00:21:04.570 "dhchap_digests": [ 00:21:04.570 "sha256", 00:21:04.570 "sha384", 00:21:04.570 "sha512" 00:21:04.570 ], 00:21:04.570 "dhchap_dhgroups": [ 00:21:04.570 "null", 00:21:04.570 "ffdhe2048", 00:21:04.570 "ffdhe3072", 00:21:04.570 "ffdhe4096", 00:21:04.570 "ffdhe6144", 00:21:04.570 "ffdhe8192" 00:21:04.570 ] 00:21:04.570 } 00:21:04.570 }, 00:21:04.570 { 00:21:04.570 "method": "bdev_nvme_attach_controller", 00:21:04.570 "params": { 00:21:04.570 "name": "TLSTEST", 00:21:04.570 "trtype": "TCP", 00:21:04.570 "adrfam": "IPv4", 00:21:04.570 "traddr": "10.0.0.2", 00:21:04.570 "trsvcid": "4420", 00:21:04.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.570 "prchk_reftag": false, 00:21:04.570 "prchk_guard": false, 00:21:04.570 "ctrlr_loss_timeout_sec": 0, 00:21:04.570 "reconnect_delay_sec": 0, 00:21:04.570 "fast_io_fail_timeout_sec": 0, 00:21:04.570 "psk": "/tmp/tmp.9e5d9DS3Pe", 00:21:04.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.570 "hdgst": false, 00:21:04.570 "ddgst": false 00:21:04.571 } 00:21:04.571 }, 00:21:04.571 { 00:21:04.571 "method": "bdev_nvme_set_hotplug", 00:21:04.571 "params": { 00:21:04.571 "period_us": 100000, 00:21:04.571 "enable": false 00:21:04.571 } 00:21:04.571 }, 00:21:04.571 { 00:21:04.571 "method": "bdev_wait_for_examine" 00:21:04.571 } 00:21:04.571 ] 00:21:04.571 }, 00:21:04.571 { 00:21:04.571 "subsystem": "nbd", 00:21:04.571 "config": [] 00:21:04.571 } 00:21:04.571 ] 00:21:04.571 }' 00:21:04.571 13:51:31 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1128479 00:21:04.571 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1128479 ']' 00:21:04.571 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1128479 00:21:04.571 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:04.571 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.571 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1128479 00:21:04.831 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:04.831 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:04.831 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1128479' 00:21:04.831 killing process with pid 1128479 00:21:04.831 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1128479 00:21:04.831 Received shutdown signal, test time was about 10.000000 seconds 00:21:04.831 00:21:04.831 Latency(us) 00:21:04.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:04.831 =================================================================================================================== 00:21:04.831 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:04.831 [2024-07-15 13:51:31.098619] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:04.831 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1128479 00:21:04.831 13:51:31 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1128106 00:21:04.831 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1128106 ']' 00:21:04.831 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1128106 00:21:04.831 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:04.831 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.831 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1128106 00:21:04.831 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:04.831 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:04.831 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1128106' 00:21:04.831 killing process with pid 1128106 00:21:04.831 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1128106 00:21:04.831 [2024-07-15 13:51:31.265530] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:04.831 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1128106 00:21:05.092 13:51:31 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:05.092 13:51:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:05.092 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:05.092 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.092 13:51:31 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:05.092 "subsystems": [ 00:21:05.092 { 00:21:05.092 "subsystem": "keyring", 00:21:05.092 "config": [] 00:21:05.092 }, 00:21:05.092 { 00:21:05.092 "subsystem": "iobuf", 00:21:05.092 "config": [ 00:21:05.092 { 00:21:05.092 "method": "iobuf_set_options", 00:21:05.092 "params": { 00:21:05.092 "small_pool_count": 8192, 00:21:05.093 "large_pool_count": 1024, 00:21:05.093 "small_bufsize": 8192, 00:21:05.093 "large_bufsize": 135168 00:21:05.093 } 00:21:05.093 } 00:21:05.093 ] 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "subsystem": "sock", 00:21:05.093 "config": [ 00:21:05.093 { 00:21:05.093 "method": "sock_set_default_impl", 00:21:05.093 "params": { 00:21:05.093 "impl_name": "posix" 00:21:05.093 } 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "method": "sock_impl_set_options", 00:21:05.093 "params": { 00:21:05.093 "impl_name": "ssl", 00:21:05.093 "recv_buf_size": 4096, 00:21:05.093 "send_buf_size": 4096, 00:21:05.093 "enable_recv_pipe": true, 00:21:05.093 "enable_quickack": false, 00:21:05.093 "enable_placement_id": 0, 00:21:05.093 "enable_zerocopy_send_server": true, 00:21:05.093 "enable_zerocopy_send_client": false, 00:21:05.093 "zerocopy_threshold": 0, 00:21:05.093 "tls_version": 0, 00:21:05.093 "enable_ktls": false 00:21:05.093 } 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "method": "sock_impl_set_options", 
00:21:05.093 "params": { 00:21:05.093 "impl_name": "posix", 00:21:05.093 "recv_buf_size": 2097152, 00:21:05.093 "send_buf_size": 2097152, 00:21:05.093 "enable_recv_pipe": true, 00:21:05.093 "enable_quickack": false, 00:21:05.093 "enable_placement_id": 0, 00:21:05.093 "enable_zerocopy_send_server": true, 00:21:05.093 "enable_zerocopy_send_client": false, 00:21:05.093 "zerocopy_threshold": 0, 00:21:05.093 "tls_version": 0, 00:21:05.093 "enable_ktls": false 00:21:05.093 } 00:21:05.093 } 00:21:05.093 ] 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "subsystem": "vmd", 00:21:05.093 "config": [] 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "subsystem": "accel", 00:21:05.093 "config": [ 00:21:05.093 { 00:21:05.093 "method": "accel_set_options", 00:21:05.093 "params": { 00:21:05.093 "small_cache_size": 128, 00:21:05.093 "large_cache_size": 16, 00:21:05.093 "task_count": 2048, 00:21:05.093 "sequence_count": 2048, 00:21:05.093 "buf_count": 2048 00:21:05.093 } 00:21:05.093 } 00:21:05.093 ] 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "subsystem": "bdev", 00:21:05.093 "config": [ 00:21:05.093 { 00:21:05.093 "method": "bdev_set_options", 00:21:05.093 "params": { 00:21:05.093 "bdev_io_pool_size": 65535, 00:21:05.093 "bdev_io_cache_size": 256, 00:21:05.093 "bdev_auto_examine": true, 00:21:05.093 "iobuf_small_cache_size": 128, 00:21:05.093 "iobuf_large_cache_size": 16 00:21:05.093 } 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "method": "bdev_raid_set_options", 00:21:05.093 "params": { 00:21:05.093 "process_window_size_kb": 1024 00:21:05.093 } 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "method": "bdev_iscsi_set_options", 00:21:05.093 "params": { 00:21:05.093 "timeout_sec": 30 00:21:05.093 } 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "method": "bdev_nvme_set_options", 00:21:05.093 "params": { 00:21:05.093 "action_on_timeout": "none", 00:21:05.093 "timeout_us": 0, 00:21:05.093 "timeout_admin_us": 0, 00:21:05.093 "keep_alive_timeout_ms": 10000, 00:21:05.093 "arbitration_burst": 0, 00:21:05.093 "low_priority_weight": 0, 00:21:05.093 "medium_priority_weight": 0, 00:21:05.093 "high_priority_weight": 0, 00:21:05.093 "nvme_adminq_poll_period_us": 10000, 00:21:05.093 "nvme_ioq_poll_period_us": 0, 00:21:05.093 "io_queue_requests": 0, 00:21:05.093 "delay_cmd_submit": true, 00:21:05.093 "transport_retry_count": 4, 00:21:05.093 "bdev_retry_count": 3, 00:21:05.093 "transport_ack_timeout": 0, 00:21:05.093 "ctrlr_loss_timeout_sec": 0, 00:21:05.093 "reconnect_delay_sec": 0, 00:21:05.093 "fast_io_fail_timeout_sec": 0, 00:21:05.093 "disable_auto_failback": false, 00:21:05.093 "generate_uuids": false, 00:21:05.093 "transport_tos": 0, 00:21:05.093 "nvme_error_stat": false, 00:21:05.093 "rdma_srq_size": 0, 00:21:05.093 "io_path_stat": false, 00:21:05.093 "allow_accel_sequence": false, 00:21:05.093 "rdma_max_cq_size": 0, 00:21:05.093 "rdma_cm_event_timeout_ms": 0, 00:21:05.093 "dhchap_digests": [ 00:21:05.093 "sha256", 00:21:05.093 "sha384", 00:21:05.093 "sha512" 00:21:05.093 ], 00:21:05.093 "dhchap_dhgroups": [ 00:21:05.093 "null", 00:21:05.093 "ffdhe2048", 00:21:05.093 "ffdhe3072", 00:21:05.093 "ffdhe4096", 00:21:05.093 "ffdhe6144", 00:21:05.093 "ffdhe8192" 00:21:05.093 ] 00:21:05.093 } 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "method": "bdev_nvme_set_hotplug", 00:21:05.093 "params": { 00:21:05.093 "period_us": 100000, 00:21:05.093 "enable": false 00:21:05.093 } 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "method": "bdev_malloc_create", 00:21:05.093 "params": { 00:21:05.093 "name": "malloc0", 00:21:05.093 "num_blocks": 8192, 
00:21:05.093 "block_size": 4096, 00:21:05.093 "physical_block_size": 4096, 00:21:05.093 "uuid": "d9dc5c7b-db23-4796-b462-477668c73847", 00:21:05.093 "optimal_io_boundary": 0 00:21:05.093 } 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "method": "bdev_wait_for_examine" 00:21:05.093 } 00:21:05.093 ] 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "subsystem": "nbd", 00:21:05.093 "config": [] 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "subsystem": "scheduler", 00:21:05.093 "config": [ 00:21:05.093 { 00:21:05.093 "method": "framework_set_scheduler", 00:21:05.093 "params": { 00:21:05.093 "name": "static" 00:21:05.093 } 00:21:05.093 } 00:21:05.093 ] 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "subsystem": "nvmf", 00:21:05.093 "config": [ 00:21:05.093 { 00:21:05.093 "method": "nvmf_set_config", 00:21:05.093 "params": { 00:21:05.093 "discovery_filter": "match_any", 00:21:05.093 "admin_cmd_passthru": { 00:21:05.093 "identify_ctrlr": false 00:21:05.093 } 00:21:05.093 } 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "method": "nvmf_set_max_subsystems", 00:21:05.093 "params": { 00:21:05.093 "max_subsystems": 1024 00:21:05.093 } 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "method": "nvmf_set_crdt", 00:21:05.093 "params": { 00:21:05.093 "crdt1": 0, 00:21:05.093 "crdt2": 0, 00:21:05.093 "crdt3": 0 00:21:05.093 } 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "method": "nvmf_create_transport", 00:21:05.093 "params": { 00:21:05.093 "trtype": "TCP", 00:21:05.093 "max_queue_depth": 128, 00:21:05.093 "max_io_qpairs_per_ctrlr": 127, 00:21:05.093 "in_capsule_data_size": 4096, 00:21:05.093 "max_io_size": 131072, 00:21:05.093 "io_unit_size": 131072, 00:21:05.093 "max_aq_depth": 128, 00:21:05.093 "num_shared_buffers": 511, 00:21:05.093 "buf_cache_size": 4294967295, 00:21:05.093 "dif_insert_or_strip": false, 00:21:05.093 "zcopy": false, 00:21:05.093 "c2h_success": false, 00:21:05.093 "sock_priority": 0, 00:21:05.093 "abort_timeout_sec": 1, 00:21:05.093 "ack_timeout": 0, 00:21:05.093 "data_wr_pool_size": 0 00:21:05.093 } 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "method": "nvmf_create_subsystem", 00:21:05.093 "params": { 00:21:05.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.093 "allow_any_host": false, 00:21:05.093 "serial_number": "SPDK00000000000001", 00:21:05.093 "model_number": "SPDK bdev Controller", 00:21:05.093 "max_namespaces": 10, 00:21:05.093 "min_cntlid": 1, 00:21:05.093 "max_cntlid": 65519, 00:21:05.093 "ana_reporting": false 00:21:05.093 } 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "method": "nvmf_subsystem_add_host", 00:21:05.093 "params": { 00:21:05.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.093 "host": "nqn.2016-06.io.spdk:host1", 00:21:05.093 "psk": "/tmp/tmp.9e5d9DS3Pe" 00:21:05.093 } 00:21:05.093 }, 00:21:05.093 { 00:21:05.093 "method": "nvmf_subsystem_add_ns", 00:21:05.094 "params": { 00:21:05.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.094 "namespace": { 00:21:05.094 "nsid": 1, 00:21:05.094 "bdev_name": "malloc0", 00:21:05.094 "nguid": "D9DC5C7BDB234796B462477668C73847", 00:21:05.094 "uuid": "d9dc5c7b-db23-4796-b462-477668c73847", 00:21:05.094 "no_auto_visible": false 00:21:05.094 } 00:21:05.094 } 00:21:05.094 }, 00:21:05.094 { 00:21:05.094 "method": "nvmf_subsystem_add_listener", 00:21:05.094 "params": { 00:21:05.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.094 "listen_address": { 00:21:05.094 "trtype": "TCP", 00:21:05.094 "adrfam": "IPv4", 00:21:05.094 "traddr": "10.0.0.2", 00:21:05.094 "trsvcid": "4420" 00:21:05.094 }, 00:21:05.094 "secure_channel": true 00:21:05.094 } 
00:21:05.094 } 00:21:05.094 ] 00:21:05.094 } 00:21:05.094 ] 00:21:05.094 }' 00:21:05.094 13:51:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1128785 00:21:05.094 13:51:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:05.094 13:51:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1128785 00:21:05.094 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1128785 ']' 00:21:05.094 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.094 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.094 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.094 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.094 13:51:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.094 [2024-07-15 13:51:31.452296] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:05.094 [2024-07-15 13:51:31.452349] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.094 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.094 [2024-07-15 13:51:31.531143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.094 [2024-07-15 13:51:31.583984] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.094 [2024-07-15 13:51:31.584017] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.094 [2024-07-15 13:51:31.584023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.094 [2024-07-15 13:51:31.584027] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.094 [2024-07-15 13:51:31.584032] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
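Everything in the tgtconf blob above came out of a single save_config call (target/tls.sh@196), and target/tls.sh@203 replays it: the fresh nvmf_tgt just started here receives that JSON on -c /dev/fd/62, i.e. the configuration is echoed back in over a file descriptor rather than written to a config file. A minimal sketch of the round-trip, assuming bash process substitution stands in for the /dev/fd plumbing:

    # capture the live configuration of the target on the default RPC socket
    tgtconf=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config)
    # relaunch the target inside the test netns and feed the JSON back via /dev/fd
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &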
00:21:05.094 [2024-07-15 13:51:31.584077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.356 [2024-07-15 13:51:31.767196] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.356 [2024-07-15 13:51:31.783165] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:05.356 [2024-07-15 13:51:31.799208] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:05.356 [2024-07-15 13:51:31.808302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.934 13:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.934 13:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:05.934 13:51:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:05.934 13:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:05.934 13:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.934 13:51:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.934 13:51:32 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1129054 00:21:05.934 13:51:32 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1129054 /var/tmp/bdevperf.sock 00:21:05.934 13:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1129054 ']' 00:21:05.934 13:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.934 13:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.934 13:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
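The initiator that pid 1129054 refers to is a bdevperf instance, launched at target/tls.sh@204 (traced just below) with -z so it idles until driven over its own RPC socket, and with the bdevperfconf JSON captured at target/tls.sh@197 supplied on -c /dev/fd/63; that config already contains the bdev_nvme_attach_controller with the TLS PSK. A sketch of the equivalent launch, reusing this run's arguments and assuming the JSON sits in $bdevperfconf:

    # idle bdevperf (-z) on a private RPC socket; the TLS-enabled attach is part of the JSON config
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c <(echo "$bdevperfconf") &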
00:21:05.934 13:51:32 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:05.934 13:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.934 13:51:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.934 13:51:32 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:05.934 "subsystems": [ 00:21:05.934 { 00:21:05.934 "subsystem": "keyring", 00:21:05.934 "config": [] 00:21:05.934 }, 00:21:05.934 { 00:21:05.934 "subsystem": "iobuf", 00:21:05.934 "config": [ 00:21:05.934 { 00:21:05.934 "method": "iobuf_set_options", 00:21:05.934 "params": { 00:21:05.934 "small_pool_count": 8192, 00:21:05.934 "large_pool_count": 1024, 00:21:05.934 "small_bufsize": 8192, 00:21:05.934 "large_bufsize": 135168 00:21:05.934 } 00:21:05.934 } 00:21:05.934 ] 00:21:05.934 }, 00:21:05.934 { 00:21:05.934 "subsystem": "sock", 00:21:05.934 "config": [ 00:21:05.934 { 00:21:05.934 "method": "sock_set_default_impl", 00:21:05.934 "params": { 00:21:05.934 "impl_name": "posix" 00:21:05.934 } 00:21:05.934 }, 00:21:05.934 { 00:21:05.934 "method": "sock_impl_set_options", 00:21:05.934 "params": { 00:21:05.934 "impl_name": "ssl", 00:21:05.934 "recv_buf_size": 4096, 00:21:05.934 "send_buf_size": 4096, 00:21:05.934 "enable_recv_pipe": true, 00:21:05.934 "enable_quickack": false, 00:21:05.934 "enable_placement_id": 0, 00:21:05.934 "enable_zerocopy_send_server": true, 00:21:05.934 "enable_zerocopy_send_client": false, 00:21:05.934 "zerocopy_threshold": 0, 00:21:05.934 "tls_version": 0, 00:21:05.934 "enable_ktls": false 00:21:05.934 } 00:21:05.934 }, 00:21:05.934 { 00:21:05.934 "method": "sock_impl_set_options", 00:21:05.934 "params": { 00:21:05.934 "impl_name": "posix", 00:21:05.934 "recv_buf_size": 2097152, 00:21:05.934 "send_buf_size": 2097152, 00:21:05.934 "enable_recv_pipe": true, 00:21:05.934 "enable_quickack": false, 00:21:05.934 "enable_placement_id": 0, 00:21:05.934 "enable_zerocopy_send_server": true, 00:21:05.934 "enable_zerocopy_send_client": false, 00:21:05.934 "zerocopy_threshold": 0, 00:21:05.934 "tls_version": 0, 00:21:05.934 "enable_ktls": false 00:21:05.934 } 00:21:05.934 } 00:21:05.934 ] 00:21:05.934 }, 00:21:05.935 { 00:21:05.935 "subsystem": "vmd", 00:21:05.935 "config": [] 00:21:05.935 }, 00:21:05.935 { 00:21:05.935 "subsystem": "accel", 00:21:05.935 "config": [ 00:21:05.935 { 00:21:05.935 "method": "accel_set_options", 00:21:05.935 "params": { 00:21:05.935 "small_cache_size": 128, 00:21:05.935 "large_cache_size": 16, 00:21:05.935 "task_count": 2048, 00:21:05.935 "sequence_count": 2048, 00:21:05.935 "buf_count": 2048 00:21:05.935 } 00:21:05.935 } 00:21:05.935 ] 00:21:05.935 }, 00:21:05.935 { 00:21:05.935 "subsystem": "bdev", 00:21:05.935 "config": [ 00:21:05.935 { 00:21:05.935 "method": "bdev_set_options", 00:21:05.935 "params": { 00:21:05.935 "bdev_io_pool_size": 65535, 00:21:05.935 "bdev_io_cache_size": 256, 00:21:05.935 "bdev_auto_examine": true, 00:21:05.935 "iobuf_small_cache_size": 128, 00:21:05.935 "iobuf_large_cache_size": 16 00:21:05.935 } 00:21:05.935 }, 00:21:05.935 { 00:21:05.935 "method": "bdev_raid_set_options", 00:21:05.935 "params": { 00:21:05.935 "process_window_size_kb": 1024 00:21:05.935 } 00:21:05.935 }, 00:21:05.935 { 00:21:05.935 "method": "bdev_iscsi_set_options", 00:21:05.935 "params": { 00:21:05.935 "timeout_sec": 30 00:21:05.935 } 00:21:05.935 }, 00:21:05.935 { 00:21:05.935 "method": 
"bdev_nvme_set_options", 00:21:05.935 "params": { 00:21:05.935 "action_on_timeout": "none", 00:21:05.935 "timeout_us": 0, 00:21:05.935 "timeout_admin_us": 0, 00:21:05.935 "keep_alive_timeout_ms": 10000, 00:21:05.935 "arbitration_burst": 0, 00:21:05.935 "low_priority_weight": 0, 00:21:05.935 "medium_priority_weight": 0, 00:21:05.935 "high_priority_weight": 0, 00:21:05.935 "nvme_adminq_poll_period_us": 10000, 00:21:05.935 "nvme_ioq_poll_period_us": 0, 00:21:05.935 "io_queue_requests": 512, 00:21:05.935 "delay_cmd_submit": true, 00:21:05.935 "transport_retry_count": 4, 00:21:05.935 "bdev_retry_count": 3, 00:21:05.935 "transport_ack_timeout": 0, 00:21:05.935 "ctrlr_loss_timeout_sec": 0, 00:21:05.935 "reconnect_delay_sec": 0, 00:21:05.935 "fast_io_fail_timeout_sec": 0, 00:21:05.935 "disable_auto_failback": false, 00:21:05.935 "generate_uuids": false, 00:21:05.935 "transport_tos": 0, 00:21:05.935 "nvme_error_stat": false, 00:21:05.935 "rdma_srq_size": 0, 00:21:05.935 "io_path_stat": false, 00:21:05.935 "allow_accel_sequence": false, 00:21:05.935 "rdma_max_cq_size": 0, 00:21:05.935 "rdma_cm_event_timeout_ms": 0, 00:21:05.935 "dhchap_digests": [ 00:21:05.935 "sha256", 00:21:05.935 "sha384", 00:21:05.935 "sha512" 00:21:05.935 ], 00:21:05.935 "dhchap_dhgroups": [ 00:21:05.935 "null", 00:21:05.935 "ffdhe2048", 00:21:05.935 "ffdhe3072", 00:21:05.935 "ffdhe4096", 00:21:05.935 "ffdhe6144", 00:21:05.935 "ffdhe8192" 00:21:05.935 ] 00:21:05.935 } 00:21:05.935 }, 00:21:05.935 { 00:21:05.935 "method": "bdev_nvme_attach_controller", 00:21:05.935 "params": { 00:21:05.935 "name": "TLSTEST", 00:21:05.935 "trtype": "TCP", 00:21:05.935 "adrfam": "IPv4", 00:21:05.935 "traddr": "10.0.0.2", 00:21:05.935 "trsvcid": "4420", 00:21:05.935 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.935 "prchk_reftag": false, 00:21:05.935 "prchk_guard": false, 00:21:05.935 "ctrlr_loss_timeout_sec": 0, 00:21:05.935 "reconnect_delay_sec": 0, 00:21:05.935 "fast_io_fail_timeout_sec": 0, 00:21:05.935 "psk": "/tmp/tmp.9e5d9DS3Pe", 00:21:05.935 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.935 "hdgst": false, 00:21:05.935 "ddgst": false 00:21:05.935 } 00:21:05.935 }, 00:21:05.935 { 00:21:05.935 "method": "bdev_nvme_set_hotplug", 00:21:05.935 "params": { 00:21:05.935 "period_us": 100000, 00:21:05.935 "enable": false 00:21:05.935 } 00:21:05.935 }, 00:21:05.935 { 00:21:05.935 "method": "bdev_wait_for_examine" 00:21:05.935 } 00:21:05.935 ] 00:21:05.935 }, 00:21:05.935 { 00:21:05.935 "subsystem": "nbd", 00:21:05.935 "config": [] 00:21:05.935 } 00:21:05.935 ] 00:21:05.935 }' 00:21:05.935 [2024-07-15 13:51:32.303687] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:21:05.935 [2024-07-15 13:51:32.303740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1129054 ] 00:21:05.935 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.935 [2024-07-15 13:51:32.353820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.935 [2024-07-15 13:51:32.405877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.195 [2024-07-15 13:51:32.530415] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:06.195 [2024-07-15 13:51:32.530472] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:06.765 13:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.765 13:51:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:06.765 13:51:33 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:06.765 Running I/O for 10 seconds... 00:21:16.855 00:21:16.855 Latency(us) 00:21:16.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.855 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:16.855 Verification LBA range: start 0x0 length 0x2000 00:21:16.855 TLSTESTn1 : 10.07 2697.78 10.54 0.00 0.00 47289.80 6225.92 113595.73 00:21:16.855 =================================================================================================================== 00:21:16.855 Total : 2697.78 10.54 0.00 0.00 47289.80 6225.92 113595.73 00:21:16.855 0 00:21:16.855 13:51:43 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:16.855 13:51:43 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1129054 00:21:16.855 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1129054 ']' 00:21:16.855 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1129054 00:21:16.855 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:16.855 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.855 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1129054 00:21:16.855 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:16.855 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:16.855 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1129054' 00:21:16.855 killing process with pid 1129054 00:21:16.855 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1129054 00:21:16.855 Received shutdown signal, test time was about 10.000000 seconds 00:21:16.855 00:21:16.855 Latency(us) 00:21:16.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.855 =================================================================================================================== 00:21:16.855 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.855 [2024-07-15 13:51:43.310906] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:16.855 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1129054 00:21:17.114 13:51:43 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1128785 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1128785 ']' 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1128785 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1128785 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1128785' 00:21:17.115 killing process with pid 1128785 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1128785 00:21:17.115 [2024-07-15 13:51:43.480922] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1128785 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1131158 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1131158 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1131158 ']' 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.115 13:51:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.375 [2024-07-15 13:51:43.659764] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
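Each scenario restarts the target the same way it was just done for pid 1131158: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace and the script blocks in waitforlisten until the JSON-RPC socket answers. A reduced stand-in for that pattern (the real common.sh helper does more bookkeeping; the rpc_get_methods poll here is only an assumed liveness probe):

    # launch the target in the test netns and remember its pid
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # poll the default RPC socket until the target is ready to serve RPCs
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done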
00:21:17.375 [2024-07-15 13:51:43.659815] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.375 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.375 [2024-07-15 13:51:43.723671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.375 [2024-07-15 13:51:43.786032] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.375 [2024-07-15 13:51:43.786072] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.375 [2024-07-15 13:51:43.786079] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.375 [2024-07-15 13:51:43.786085] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.375 [2024-07-15 13:51:43.786091] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:17.375 [2024-07-15 13:51:43.786112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.946 13:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.946 13:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:17.946 13:51:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.946 13:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:17.946 13:51:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.946 13:51:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.946 13:51:44 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.9e5d9DS3Pe 00:21:17.946 13:51:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9e5d9DS3Pe 00:21:17.946 13:51:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:18.207 [2024-07-15 13:51:44.600912] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.207 13:51:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:18.467 13:51:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:18.467 [2024-07-15 13:51:44.925717] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.467 [2024-07-15 13:51:44.925927] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.467 13:51:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:18.727 malloc0 00:21:18.727 13:51:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:18.988 13:51:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.9e5d9DS3Pe 00:21:18.988 [2024-07-15 13:51:45.413835] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:18.988 13:51:45 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:18.988 13:51:45 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1131518 00:21:18.988 13:51:45 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.988 13:51:45 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1131518 /var/tmp/bdevperf.sock 00:21:18.988 13:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1131518 ']' 00:21:18.988 13:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.988 13:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:18.988 13:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.988 13:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:18.988 13:51:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.988 [2024-07-15 13:51:45.487345] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:18.988 [2024-07-15 13:51:45.487412] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131518 ] 00:21:19.249 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.249 [2024-07-15 13:51:45.564492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.249 [2024-07-15 13:51:45.618545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.820 13:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:19.820 13:51:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:19.821 13:51:46 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9e5d9DS3Pe 00:21:20.081 13:51:46 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:20.081 [2024-07-15 13:51:46.544480] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.342 nvme0n1 00:21:20.342 13:51:46 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:20.342 Running I/O for 1 seconds... 
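On the initiator side this scenario replaces the deprecated spdk_nvme_ctrlr_opts.psk file path (flagged above for removal in v24.09) with a keyring reference: target/tls.sh@227 registers the file as key0 on the bdevperf instance and target/tls.sh@228 attaches the controller by key name. Sketch with the exact names from this run:

    # register the PSK file as a named key on bdevperf's RPC socket
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.9e5d9DS3Pe
    # attach the TLS-protected controller referencing the key instead of the file path
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1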
00:21:21.285 00:21:21.285 Latency(us) 00:21:21.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.285 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:21.285 Verification LBA range: start 0x0 length 0x2000 00:21:21.285 nvme0n1 : 1.02 3765.03 14.71 0.00 0.00 33619.63 5133.65 103983.79 00:21:21.285 =================================================================================================================== 00:21:21.285 Total : 3765.03 14.71 0.00 0.00 33619.63 5133.65 103983.79 00:21:21.285 0 00:21:21.285 13:51:47 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1131518 00:21:21.285 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1131518 ']' 00:21:21.285 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1131518 00:21:21.285 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:21.285 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:21.285 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1131518 00:21:21.285 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:21.285 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:21.285 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1131518' 00:21:21.285 killing process with pid 1131518 00:21:21.285 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1131518 00:21:21.285 Received shutdown signal, test time was about 1.000000 seconds 00:21:21.285 00:21:21.285 Latency(us) 00:21:21.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.285 =================================================================================================================== 00:21:21.285 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:21.285 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1131518 00:21:21.544 13:51:47 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1131158 00:21:21.544 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1131158 ']' 00:21:21.544 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1131158 00:21:21.544 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:21.545 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:21.545 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1131158 00:21:21.545 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:21.545 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:21.545 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1131158' 00:21:21.545 killing process with pid 1131158 00:21:21.545 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1131158 00:21:21.545 [2024-07-15 13:51:47.979686] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:21.545 13:51:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1131158 00:21:21.805 13:51:48 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:21.805 13:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:21.805 
13:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:21.805 13:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.805 13:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1132193 00:21:21.805 13:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1132193 00:21:21.805 13:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:21.805 13:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1132193 ']' 00:21:21.805 13:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.805 13:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.805 13:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.805 13:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.805 13:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.805 [2024-07-15 13:51:48.187338] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:21.805 [2024-07-15 13:51:48.187402] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.805 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.805 [2024-07-15 13:51:48.252791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.805 [2024-07-15 13:51:48.315071] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.805 [2024-07-15 13:51:48.315110] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.805 [2024-07-15 13:51:48.315117] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.805 [2024-07-15 13:51:48.315127] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.805 [2024-07-15 13:51:48.315133] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
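(The app_setup_trace notices above describe SPDK's tracing facility. Following the application's own hint, a snapshot of the nvmf tracepoints can be taken from the running instance; the -f invocation for reading a copied trace file is an assumption, not something this log exercises.)

# live snapshot, exactly as suggested by the notice (app name nvmf, shm id 0)
build/bin/spdk_trace -s nvmf -i 0

# or keep the shared-memory trace for later inspection
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
build/bin/spdk_trace -f /tmp/nvmf_trace.0    # assumed flag for reading a trace file offline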
00:21:21.805 [2024-07-15 13:51:48.315154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.748 13:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.748 13:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:22.748 13:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:22.748 13:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:22.748 13:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.748 13:51:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.748 13:51:48 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:22.748 13:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.748 13:51:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.748 [2024-07-15 13:51:48.993604] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.748 malloc0 00:21:22.748 [2024-07-15 13:51:49.020361] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.748 [2024-07-15 13:51:49.020595] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.748 13:51:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.748 13:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1132245 00:21:22.748 13:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1132245 /var/tmp/bdevperf.sock 00:21:22.748 13:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:22.748 13:51:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1132245 ']' 00:21:22.748 13:51:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.748 13:51:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.748 13:51:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.748 13:51:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.748 13:51:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.748 [2024-07-15 13:51:49.098414] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
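(The bare rpc_cmd at target/tls.sh@239 runs with xtrace disabled, so the individual RPCs are not echoed. Judging from the notices that follow and from the configuration saved further down, the target-side setup is roughly equivalent to the sketch below; the exact spellings of the --psk and --secure-channel flags are assumed for this SPDK revision rather than copied from the log.)

scripts/rpc.py nvmf_create_transport -t tcp                      # *** TCP Transport Init ***
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9e5d9DS3Pe     # PSK later referenced by host1
scripts/rpc.py bdev_malloc_create -b malloc0 32 4096             # 8192 blocks x 4096 B
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel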
00:21:22.748 [2024-07-15 13:51:49.098462] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132245 ] 00:21:22.748 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.748 [2024-07-15 13:51:49.170990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.748 [2024-07-15 13:51:49.224790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.689 13:51:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.689 13:51:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:23.689 13:51:49 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9e5d9DS3Pe 00:21:23.689 13:51:50 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:23.689 [2024-07-15 13:51:50.155016] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.949 nvme0n1 00:21:23.949 13:51:50 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:23.949 Running I/O for 1 seconds... 00:21:24.888 00:21:24.888 Latency(us) 00:21:24.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.888 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:24.888 Verification LBA range: start 0x0 length 0x2000 00:21:24.888 nvme0n1 : 1.04 1962.94 7.67 0.00 0.00 64274.47 6307.84 109226.67 00:21:24.888 =================================================================================================================== 00:21:24.888 Total : 1962.94 7.67 0.00 0.00 64274.47 6307.84 109226.67 00:21:24.888 0 00:21:24.888 13:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:24.888 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.888 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.148 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.148 13:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:25.148 "subsystems": [ 00:21:25.148 { 00:21:25.148 "subsystem": "keyring", 00:21:25.148 "config": [ 00:21:25.148 { 00:21:25.148 "method": "keyring_file_add_key", 00:21:25.148 "params": { 00:21:25.148 "name": "key0", 00:21:25.148 "path": "/tmp/tmp.9e5d9DS3Pe" 00:21:25.148 } 00:21:25.148 } 00:21:25.148 ] 00:21:25.148 }, 00:21:25.148 { 00:21:25.148 "subsystem": "iobuf", 00:21:25.148 "config": [ 00:21:25.148 { 00:21:25.148 "method": "iobuf_set_options", 00:21:25.148 "params": { 00:21:25.148 "small_pool_count": 8192, 00:21:25.148 "large_pool_count": 1024, 00:21:25.148 "small_bufsize": 8192, 00:21:25.148 "large_bufsize": 135168 00:21:25.148 } 00:21:25.148 } 00:21:25.148 ] 00:21:25.148 }, 00:21:25.148 { 00:21:25.148 "subsystem": "sock", 00:21:25.148 "config": [ 00:21:25.148 { 00:21:25.148 "method": "sock_set_default_impl", 00:21:25.148 "params": { 00:21:25.148 "impl_name": "posix" 00:21:25.148 } 
00:21:25.148 }, 00:21:25.148 { 00:21:25.148 "method": "sock_impl_set_options", 00:21:25.148 "params": { 00:21:25.148 "impl_name": "ssl", 00:21:25.148 "recv_buf_size": 4096, 00:21:25.148 "send_buf_size": 4096, 00:21:25.148 "enable_recv_pipe": true, 00:21:25.149 "enable_quickack": false, 00:21:25.149 "enable_placement_id": 0, 00:21:25.149 "enable_zerocopy_send_server": true, 00:21:25.149 "enable_zerocopy_send_client": false, 00:21:25.149 "zerocopy_threshold": 0, 00:21:25.149 "tls_version": 0, 00:21:25.149 "enable_ktls": false 00:21:25.149 } 00:21:25.149 }, 00:21:25.149 { 00:21:25.149 "method": "sock_impl_set_options", 00:21:25.149 "params": { 00:21:25.149 "impl_name": "posix", 00:21:25.149 "recv_buf_size": 2097152, 00:21:25.149 "send_buf_size": 2097152, 00:21:25.149 "enable_recv_pipe": true, 00:21:25.149 "enable_quickack": false, 00:21:25.149 "enable_placement_id": 0, 00:21:25.149 "enable_zerocopy_send_server": true, 00:21:25.149 "enable_zerocopy_send_client": false, 00:21:25.149 "zerocopy_threshold": 0, 00:21:25.149 "tls_version": 0, 00:21:25.149 "enable_ktls": false 00:21:25.149 } 00:21:25.149 } 00:21:25.149 ] 00:21:25.149 }, 00:21:25.149 { 00:21:25.149 "subsystem": "vmd", 00:21:25.149 "config": [] 00:21:25.149 }, 00:21:25.149 { 00:21:25.149 "subsystem": "accel", 00:21:25.149 "config": [ 00:21:25.149 { 00:21:25.149 "method": "accel_set_options", 00:21:25.149 "params": { 00:21:25.149 "small_cache_size": 128, 00:21:25.149 "large_cache_size": 16, 00:21:25.149 "task_count": 2048, 00:21:25.149 "sequence_count": 2048, 00:21:25.149 "buf_count": 2048 00:21:25.149 } 00:21:25.149 } 00:21:25.149 ] 00:21:25.149 }, 00:21:25.149 { 00:21:25.149 "subsystem": "bdev", 00:21:25.149 "config": [ 00:21:25.149 { 00:21:25.149 "method": "bdev_set_options", 00:21:25.149 "params": { 00:21:25.149 "bdev_io_pool_size": 65535, 00:21:25.149 "bdev_io_cache_size": 256, 00:21:25.149 "bdev_auto_examine": true, 00:21:25.149 "iobuf_small_cache_size": 128, 00:21:25.149 "iobuf_large_cache_size": 16 00:21:25.149 } 00:21:25.149 }, 00:21:25.149 { 00:21:25.149 "method": "bdev_raid_set_options", 00:21:25.149 "params": { 00:21:25.149 "process_window_size_kb": 1024 00:21:25.149 } 00:21:25.149 }, 00:21:25.149 { 00:21:25.149 "method": "bdev_iscsi_set_options", 00:21:25.149 "params": { 00:21:25.149 "timeout_sec": 30 00:21:25.149 } 00:21:25.149 }, 00:21:25.149 { 00:21:25.149 "method": "bdev_nvme_set_options", 00:21:25.149 "params": { 00:21:25.149 "action_on_timeout": "none", 00:21:25.149 "timeout_us": 0, 00:21:25.149 "timeout_admin_us": 0, 00:21:25.149 "keep_alive_timeout_ms": 10000, 00:21:25.149 "arbitration_burst": 0, 00:21:25.149 "low_priority_weight": 0, 00:21:25.149 "medium_priority_weight": 0, 00:21:25.149 "high_priority_weight": 0, 00:21:25.149 "nvme_adminq_poll_period_us": 10000, 00:21:25.149 "nvme_ioq_poll_period_us": 0, 00:21:25.149 "io_queue_requests": 0, 00:21:25.149 "delay_cmd_submit": true, 00:21:25.149 "transport_retry_count": 4, 00:21:25.149 "bdev_retry_count": 3, 00:21:25.149 "transport_ack_timeout": 0, 00:21:25.149 "ctrlr_loss_timeout_sec": 0, 00:21:25.149 "reconnect_delay_sec": 0, 00:21:25.149 "fast_io_fail_timeout_sec": 0, 00:21:25.149 "disable_auto_failback": false, 00:21:25.149 "generate_uuids": false, 00:21:25.149 "transport_tos": 0, 00:21:25.149 "nvme_error_stat": false, 00:21:25.149 "rdma_srq_size": 0, 00:21:25.149 "io_path_stat": false, 00:21:25.149 "allow_accel_sequence": false, 00:21:25.149 "rdma_max_cq_size": 0, 00:21:25.149 "rdma_cm_event_timeout_ms": 0, 00:21:25.149 "dhchap_digests": [ 00:21:25.149 "sha256", 
00:21:25.149 "sha384", 00:21:25.149 "sha512" 00:21:25.149 ], 00:21:25.149 "dhchap_dhgroups": [ 00:21:25.149 "null", 00:21:25.149 "ffdhe2048", 00:21:25.149 "ffdhe3072", 00:21:25.149 "ffdhe4096", 00:21:25.149 "ffdhe6144", 00:21:25.149 "ffdhe8192" 00:21:25.149 ] 00:21:25.149 } 00:21:25.149 }, 00:21:25.149 { 00:21:25.149 "method": "bdev_nvme_set_hotplug", 00:21:25.149 "params": { 00:21:25.149 "period_us": 100000, 00:21:25.149 "enable": false 00:21:25.149 } 00:21:25.149 }, 00:21:25.149 { 00:21:25.149 "method": "bdev_malloc_create", 00:21:25.149 "params": { 00:21:25.149 "name": "malloc0", 00:21:25.149 "num_blocks": 8192, 00:21:25.149 "block_size": 4096, 00:21:25.149 "physical_block_size": 4096, 00:21:25.149 "uuid": "4a1952d4-0b9f-4ab3-9a31-b0be6e48ce43", 00:21:25.149 "optimal_io_boundary": 0 00:21:25.149 } 00:21:25.149 }, 00:21:25.149 { 00:21:25.149 "method": "bdev_wait_for_examine" 00:21:25.149 } 00:21:25.149 ] 00:21:25.149 }, 00:21:25.149 { 00:21:25.149 "subsystem": "nbd", 00:21:25.149 "config": [] 00:21:25.149 }, 00:21:25.149 { 00:21:25.149 "subsystem": "scheduler", 00:21:25.149 "config": [ 00:21:25.149 { 00:21:25.149 "method": "framework_set_scheduler", 00:21:25.149 "params": { 00:21:25.149 "name": "static" 00:21:25.149 } 00:21:25.149 } 00:21:25.149 ] 00:21:25.149 }, 00:21:25.149 { 00:21:25.149 "subsystem": "nvmf", 00:21:25.149 "config": [ 00:21:25.149 { 00:21:25.149 "method": "nvmf_set_config", 00:21:25.149 "params": { 00:21:25.149 "discovery_filter": "match_any", 00:21:25.149 "admin_cmd_passthru": { 00:21:25.149 "identify_ctrlr": false 00:21:25.149 } 00:21:25.149 } 00:21:25.149 }, 00:21:25.149 { 00:21:25.149 "method": "nvmf_set_max_subsystems", 00:21:25.149 "params": { 00:21:25.149 "max_subsystems": 1024 00:21:25.149 } 00:21:25.149 }, 00:21:25.149 { 00:21:25.149 "method": "nvmf_set_crdt", 00:21:25.149 "params": { 00:21:25.149 "crdt1": 0, 00:21:25.149 "crdt2": 0, 00:21:25.149 "crdt3": 0 00:21:25.149 } 00:21:25.149 }, 00:21:25.149 { 00:21:25.149 "method": "nvmf_create_transport", 00:21:25.149 "params": { 00:21:25.149 "trtype": "TCP", 00:21:25.149 "max_queue_depth": 128, 00:21:25.149 "max_io_qpairs_per_ctrlr": 127, 00:21:25.149 "in_capsule_data_size": 4096, 00:21:25.149 "max_io_size": 131072, 00:21:25.149 "io_unit_size": 131072, 00:21:25.149 "max_aq_depth": 128, 00:21:25.149 "num_shared_buffers": 511, 00:21:25.149 "buf_cache_size": 4294967295, 00:21:25.149 "dif_insert_or_strip": false, 00:21:25.149 "zcopy": false, 00:21:25.149 "c2h_success": false, 00:21:25.150 "sock_priority": 0, 00:21:25.150 "abort_timeout_sec": 1, 00:21:25.150 "ack_timeout": 0, 00:21:25.150 "data_wr_pool_size": 0 00:21:25.150 } 00:21:25.150 }, 00:21:25.150 { 00:21:25.150 "method": "nvmf_create_subsystem", 00:21:25.150 "params": { 00:21:25.150 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.150 "allow_any_host": false, 00:21:25.150 "serial_number": "00000000000000000000", 00:21:25.150 "model_number": "SPDK bdev Controller", 00:21:25.150 "max_namespaces": 32, 00:21:25.150 "min_cntlid": 1, 00:21:25.150 "max_cntlid": 65519, 00:21:25.150 "ana_reporting": false 00:21:25.150 } 00:21:25.150 }, 00:21:25.150 { 00:21:25.150 "method": "nvmf_subsystem_add_host", 00:21:25.150 "params": { 00:21:25.150 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.150 "host": "nqn.2016-06.io.spdk:host1", 00:21:25.150 "psk": "key0" 00:21:25.150 } 00:21:25.150 }, 00:21:25.150 { 00:21:25.150 "method": "nvmf_subsystem_add_ns", 00:21:25.150 "params": { 00:21:25.150 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.150 "namespace": { 00:21:25.150 "nsid": 1, 
00:21:25.150 "bdev_name": "malloc0", 00:21:25.150 "nguid": "4A1952D40B9F4AB39A31B0BE6E48CE43", 00:21:25.150 "uuid": "4a1952d4-0b9f-4ab3-9a31-b0be6e48ce43", 00:21:25.150 "no_auto_visible": false 00:21:25.150 } 00:21:25.150 } 00:21:25.150 }, 00:21:25.150 { 00:21:25.150 "method": "nvmf_subsystem_add_listener", 00:21:25.150 "params": { 00:21:25.150 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.150 "listen_address": { 00:21:25.150 "trtype": "TCP", 00:21:25.150 "adrfam": "IPv4", 00:21:25.150 "traddr": "10.0.0.2", 00:21:25.150 "trsvcid": "4420" 00:21:25.150 }, 00:21:25.150 "secure_channel": true 00:21:25.150 } 00:21:25.150 } 00:21:25.150 ] 00:21:25.150 } 00:21:25.150 ] 00:21:25.150 }' 00:21:25.150 13:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:25.410 13:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:25.410 "subsystems": [ 00:21:25.410 { 00:21:25.410 "subsystem": "keyring", 00:21:25.410 "config": [ 00:21:25.410 { 00:21:25.410 "method": "keyring_file_add_key", 00:21:25.410 "params": { 00:21:25.410 "name": "key0", 00:21:25.410 "path": "/tmp/tmp.9e5d9DS3Pe" 00:21:25.410 } 00:21:25.410 } 00:21:25.410 ] 00:21:25.410 }, 00:21:25.410 { 00:21:25.410 "subsystem": "iobuf", 00:21:25.410 "config": [ 00:21:25.410 { 00:21:25.410 "method": "iobuf_set_options", 00:21:25.410 "params": { 00:21:25.410 "small_pool_count": 8192, 00:21:25.410 "large_pool_count": 1024, 00:21:25.410 "small_bufsize": 8192, 00:21:25.410 "large_bufsize": 135168 00:21:25.410 } 00:21:25.410 } 00:21:25.410 ] 00:21:25.410 }, 00:21:25.410 { 00:21:25.410 "subsystem": "sock", 00:21:25.410 "config": [ 00:21:25.410 { 00:21:25.410 "method": "sock_set_default_impl", 00:21:25.410 "params": { 00:21:25.410 "impl_name": "posix" 00:21:25.410 } 00:21:25.410 }, 00:21:25.410 { 00:21:25.410 "method": "sock_impl_set_options", 00:21:25.410 "params": { 00:21:25.410 "impl_name": "ssl", 00:21:25.410 "recv_buf_size": 4096, 00:21:25.410 "send_buf_size": 4096, 00:21:25.410 "enable_recv_pipe": true, 00:21:25.410 "enable_quickack": false, 00:21:25.410 "enable_placement_id": 0, 00:21:25.410 "enable_zerocopy_send_server": true, 00:21:25.410 "enable_zerocopy_send_client": false, 00:21:25.410 "zerocopy_threshold": 0, 00:21:25.410 "tls_version": 0, 00:21:25.410 "enable_ktls": false 00:21:25.410 } 00:21:25.410 }, 00:21:25.410 { 00:21:25.410 "method": "sock_impl_set_options", 00:21:25.410 "params": { 00:21:25.410 "impl_name": "posix", 00:21:25.410 "recv_buf_size": 2097152, 00:21:25.410 "send_buf_size": 2097152, 00:21:25.410 "enable_recv_pipe": true, 00:21:25.410 "enable_quickack": false, 00:21:25.410 "enable_placement_id": 0, 00:21:25.410 "enable_zerocopy_send_server": true, 00:21:25.410 "enable_zerocopy_send_client": false, 00:21:25.410 "zerocopy_threshold": 0, 00:21:25.410 "tls_version": 0, 00:21:25.410 "enable_ktls": false 00:21:25.410 } 00:21:25.410 } 00:21:25.410 ] 00:21:25.410 }, 00:21:25.410 { 00:21:25.410 "subsystem": "vmd", 00:21:25.410 "config": [] 00:21:25.410 }, 00:21:25.410 { 00:21:25.410 "subsystem": "accel", 00:21:25.410 "config": [ 00:21:25.410 { 00:21:25.410 "method": "accel_set_options", 00:21:25.410 "params": { 00:21:25.411 "small_cache_size": 128, 00:21:25.411 "large_cache_size": 16, 00:21:25.411 "task_count": 2048, 00:21:25.411 "sequence_count": 2048, 00:21:25.411 "buf_count": 2048 00:21:25.411 } 00:21:25.411 } 00:21:25.411 ] 00:21:25.411 }, 00:21:25.411 { 00:21:25.411 "subsystem": "bdev", 00:21:25.411 "config": [ 
00:21:25.411 { 00:21:25.411 "method": "bdev_set_options", 00:21:25.411 "params": { 00:21:25.411 "bdev_io_pool_size": 65535, 00:21:25.411 "bdev_io_cache_size": 256, 00:21:25.411 "bdev_auto_examine": true, 00:21:25.411 "iobuf_small_cache_size": 128, 00:21:25.411 "iobuf_large_cache_size": 16 00:21:25.411 } 00:21:25.411 }, 00:21:25.411 { 00:21:25.411 "method": "bdev_raid_set_options", 00:21:25.411 "params": { 00:21:25.411 "process_window_size_kb": 1024 00:21:25.411 } 00:21:25.411 }, 00:21:25.411 { 00:21:25.411 "method": "bdev_iscsi_set_options", 00:21:25.411 "params": { 00:21:25.411 "timeout_sec": 30 00:21:25.411 } 00:21:25.411 }, 00:21:25.411 { 00:21:25.411 "method": "bdev_nvme_set_options", 00:21:25.411 "params": { 00:21:25.411 "action_on_timeout": "none", 00:21:25.411 "timeout_us": 0, 00:21:25.411 "timeout_admin_us": 0, 00:21:25.411 "keep_alive_timeout_ms": 10000, 00:21:25.411 "arbitration_burst": 0, 00:21:25.411 "low_priority_weight": 0, 00:21:25.411 "medium_priority_weight": 0, 00:21:25.411 "high_priority_weight": 0, 00:21:25.411 "nvme_adminq_poll_period_us": 10000, 00:21:25.411 "nvme_ioq_poll_period_us": 0, 00:21:25.411 "io_queue_requests": 512, 00:21:25.411 "delay_cmd_submit": true, 00:21:25.411 "transport_retry_count": 4, 00:21:25.411 "bdev_retry_count": 3, 00:21:25.411 "transport_ack_timeout": 0, 00:21:25.411 "ctrlr_loss_timeout_sec": 0, 00:21:25.411 "reconnect_delay_sec": 0, 00:21:25.411 "fast_io_fail_timeout_sec": 0, 00:21:25.411 "disable_auto_failback": false, 00:21:25.411 "generate_uuids": false, 00:21:25.411 "transport_tos": 0, 00:21:25.411 "nvme_error_stat": false, 00:21:25.411 "rdma_srq_size": 0, 00:21:25.411 "io_path_stat": false, 00:21:25.411 "allow_accel_sequence": false, 00:21:25.411 "rdma_max_cq_size": 0, 00:21:25.411 "rdma_cm_event_timeout_ms": 0, 00:21:25.411 "dhchap_digests": [ 00:21:25.411 "sha256", 00:21:25.411 "sha384", 00:21:25.411 "sha512" 00:21:25.411 ], 00:21:25.411 "dhchap_dhgroups": [ 00:21:25.411 "null", 00:21:25.411 "ffdhe2048", 00:21:25.411 "ffdhe3072", 00:21:25.411 "ffdhe4096", 00:21:25.411 "ffdhe6144", 00:21:25.411 "ffdhe8192" 00:21:25.411 ] 00:21:25.411 } 00:21:25.411 }, 00:21:25.411 { 00:21:25.411 "method": "bdev_nvme_attach_controller", 00:21:25.411 "params": { 00:21:25.411 "name": "nvme0", 00:21:25.411 "trtype": "TCP", 00:21:25.411 "adrfam": "IPv4", 00:21:25.411 "traddr": "10.0.0.2", 00:21:25.411 "trsvcid": "4420", 00:21:25.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.411 "prchk_reftag": false, 00:21:25.411 "prchk_guard": false, 00:21:25.411 "ctrlr_loss_timeout_sec": 0, 00:21:25.411 "reconnect_delay_sec": 0, 00:21:25.411 "fast_io_fail_timeout_sec": 0, 00:21:25.411 "psk": "key0", 00:21:25.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:25.411 "hdgst": false, 00:21:25.411 "ddgst": false 00:21:25.411 } 00:21:25.411 }, 00:21:25.411 { 00:21:25.411 "method": "bdev_nvme_set_hotplug", 00:21:25.411 "params": { 00:21:25.411 "period_us": 100000, 00:21:25.411 "enable": false 00:21:25.411 } 00:21:25.411 }, 00:21:25.411 { 00:21:25.411 "method": "bdev_enable_histogram", 00:21:25.411 "params": { 00:21:25.411 "name": "nvme0n1", 00:21:25.411 "enable": true 00:21:25.411 } 00:21:25.411 }, 00:21:25.411 { 00:21:25.411 "method": "bdev_wait_for_examine" 00:21:25.411 } 00:21:25.411 ] 00:21:25.411 }, 00:21:25.411 { 00:21:25.411 "subsystem": "nbd", 00:21:25.411 "config": [] 00:21:25.411 } 00:21:25.411 ] 00:21:25.411 }' 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1132245 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 1132245 ']' 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1132245 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1132245 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1132245' 00:21:25.411 killing process with pid 1132245 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1132245 00:21:25.411 Received shutdown signal, test time was about 1.000000 seconds 00:21:25.411 00:21:25.411 Latency(us) 00:21:25.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.411 =================================================================================================================== 00:21:25.411 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1132245 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1132193 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1132193 ']' 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1132193 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.411 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1132193 00:21:25.672 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:25.672 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:25.672 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1132193' 00:21:25.672 killing process with pid 1132193 00:21:25.672 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1132193 00:21:25.672 13:51:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1132193 00:21:25.672 13:51:52 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:25.672 13:51:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:25.672 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:25.672 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.672 13:51:52 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:25.672 "subsystems": [ 00:21:25.672 { 00:21:25.672 "subsystem": "keyring", 00:21:25.672 "config": [ 00:21:25.672 { 00:21:25.672 "method": "keyring_file_add_key", 00:21:25.672 "params": { 00:21:25.672 "name": "key0", 00:21:25.672 "path": "/tmp/tmp.9e5d9DS3Pe" 00:21:25.672 } 00:21:25.672 } 00:21:25.672 ] 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "subsystem": "iobuf", 00:21:25.672 "config": [ 00:21:25.672 { 00:21:25.672 "method": "iobuf_set_options", 00:21:25.672 "params": { 00:21:25.672 "small_pool_count": 8192, 00:21:25.672 "large_pool_count": 1024, 00:21:25.672 "small_bufsize": 8192, 00:21:25.672 
"large_bufsize": 135168 00:21:25.672 } 00:21:25.672 } 00:21:25.672 ] 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "subsystem": "sock", 00:21:25.672 "config": [ 00:21:25.672 { 00:21:25.672 "method": "sock_set_default_impl", 00:21:25.672 "params": { 00:21:25.672 "impl_name": "posix" 00:21:25.672 } 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "method": "sock_impl_set_options", 00:21:25.672 "params": { 00:21:25.672 "impl_name": "ssl", 00:21:25.672 "recv_buf_size": 4096, 00:21:25.672 "send_buf_size": 4096, 00:21:25.672 "enable_recv_pipe": true, 00:21:25.672 "enable_quickack": false, 00:21:25.672 "enable_placement_id": 0, 00:21:25.672 "enable_zerocopy_send_server": true, 00:21:25.672 "enable_zerocopy_send_client": false, 00:21:25.672 "zerocopy_threshold": 0, 00:21:25.672 "tls_version": 0, 00:21:25.672 "enable_ktls": false 00:21:25.672 } 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "method": "sock_impl_set_options", 00:21:25.672 "params": { 00:21:25.672 "impl_name": "posix", 00:21:25.672 "recv_buf_size": 2097152, 00:21:25.672 "send_buf_size": 2097152, 00:21:25.672 "enable_recv_pipe": true, 00:21:25.672 "enable_quickack": false, 00:21:25.672 "enable_placement_id": 0, 00:21:25.672 "enable_zerocopy_send_server": true, 00:21:25.672 "enable_zerocopy_send_client": false, 00:21:25.672 "zerocopy_threshold": 0, 00:21:25.672 "tls_version": 0, 00:21:25.672 "enable_ktls": false 00:21:25.672 } 00:21:25.672 } 00:21:25.672 ] 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "subsystem": "vmd", 00:21:25.672 "config": [] 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "subsystem": "accel", 00:21:25.672 "config": [ 00:21:25.672 { 00:21:25.672 "method": "accel_set_options", 00:21:25.672 "params": { 00:21:25.672 "small_cache_size": 128, 00:21:25.672 "large_cache_size": 16, 00:21:25.672 "task_count": 2048, 00:21:25.672 "sequence_count": 2048, 00:21:25.672 "buf_count": 2048 00:21:25.672 } 00:21:25.672 } 00:21:25.672 ] 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "subsystem": "bdev", 00:21:25.672 "config": [ 00:21:25.672 { 00:21:25.672 "method": "bdev_set_options", 00:21:25.672 "params": { 00:21:25.672 "bdev_io_pool_size": 65535, 00:21:25.672 "bdev_io_cache_size": 256, 00:21:25.672 "bdev_auto_examine": true, 00:21:25.672 "iobuf_small_cache_size": 128, 00:21:25.672 "iobuf_large_cache_size": 16 00:21:25.672 } 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "method": "bdev_raid_set_options", 00:21:25.672 "params": { 00:21:25.672 "process_window_size_kb": 1024 00:21:25.672 } 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "method": "bdev_iscsi_set_options", 00:21:25.672 "params": { 00:21:25.672 "timeout_sec": 30 00:21:25.672 } 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "method": "bdev_nvme_set_options", 00:21:25.672 "params": { 00:21:25.672 "action_on_timeout": "none", 00:21:25.672 "timeout_us": 0, 00:21:25.672 "timeout_admin_us": 0, 00:21:25.672 "keep_alive_timeout_ms": 10000, 00:21:25.672 "arbitration_burst": 0, 00:21:25.672 "low_priority_weight": 0, 00:21:25.672 "medium_priority_weight": 0, 00:21:25.672 "high_priority_weight": 0, 00:21:25.672 "nvme_adminq_poll_period_us": 10000, 00:21:25.672 "nvme_ioq_poll_period_us": 0, 00:21:25.672 "io_queue_requests": 0, 00:21:25.672 "delay_cmd_submit": true, 00:21:25.672 "transport_retry_count": 4, 00:21:25.672 "bdev_retry_count": 3, 00:21:25.672 "transport_ack_timeout": 0, 00:21:25.672 "ctrlr_loss_timeout_sec": 0, 00:21:25.672 "reconnect_delay_sec": 0, 00:21:25.672 "fast_io_fail_timeout_sec": 0, 00:21:25.672 "disable_auto_failback": false, 00:21:25.672 "generate_uuids": false, 00:21:25.672 
"transport_tos": 0, 00:21:25.672 "nvme_error_stat": false, 00:21:25.672 "rdma_srq_size": 0, 00:21:25.672 "io_path_stat": false, 00:21:25.672 "allow_accel_sequence": false, 00:21:25.672 "rdma_max_cq_size": 0, 00:21:25.672 "rdma_cm_event_timeout_ms": 0, 00:21:25.672 "dhchap_digests": [ 00:21:25.672 "sha256", 00:21:25.672 "sha384", 00:21:25.672 "sha512" 00:21:25.672 ], 00:21:25.672 "dhchap_dhgroups": [ 00:21:25.672 "null", 00:21:25.672 "ffdhe2048", 00:21:25.672 "ffdhe3072", 00:21:25.672 "ffdhe4096", 00:21:25.672 "ffdhe6144", 00:21:25.672 "ffdhe8192" 00:21:25.672 ] 00:21:25.672 } 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "method": "bdev_nvme_set_hotplug", 00:21:25.672 "params": { 00:21:25.672 "period_us": 100000, 00:21:25.672 "enable": false 00:21:25.672 } 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "method": "bdev_malloc_create", 00:21:25.672 "params": { 00:21:25.672 "name": "malloc0", 00:21:25.672 "num_blocks": 8192, 00:21:25.672 "block_size": 4096, 00:21:25.672 "physical_block_size": 4096, 00:21:25.672 "uuid": "4a1952d4-0b9f-4ab3-9a31-b0be6e48ce43", 00:21:25.672 "optimal_io_boundary": 0 00:21:25.672 } 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "method": "bdev_wait_for_examine" 00:21:25.672 } 00:21:25.672 ] 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "subsystem": "nbd", 00:21:25.672 "config": [] 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "subsystem": "scheduler", 00:21:25.672 "config": [ 00:21:25.672 { 00:21:25.672 "method": "framework_set_scheduler", 00:21:25.672 "params": { 00:21:25.672 "name": "static" 00:21:25.672 } 00:21:25.672 } 00:21:25.672 ] 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "subsystem": "nvmf", 00:21:25.672 "config": [ 00:21:25.672 { 00:21:25.672 "method": "nvmf_set_config", 00:21:25.672 "params": { 00:21:25.672 "discovery_filter": "match_any", 00:21:25.672 "admin_cmd_passthru": { 00:21:25.672 "identify_ctrlr": false 00:21:25.672 } 00:21:25.672 } 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "method": "nvmf_set_max_subsystems", 00:21:25.672 "params": { 00:21:25.672 "max_subsystems": 1024 00:21:25.672 } 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "method": "nvmf_set_crdt", 00:21:25.672 "params": { 00:21:25.672 "crdt1": 0, 00:21:25.672 "crdt2": 0, 00:21:25.672 "crdt3": 0 00:21:25.672 } 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "method": "nvmf_create_transport", 00:21:25.672 "params": { 00:21:25.672 "trtype": "TCP", 00:21:25.672 "max_queue_depth": 128, 00:21:25.672 "max_io_qpairs_per_ctrlr": 127, 00:21:25.672 "in_capsule_data_size": 4096, 00:21:25.672 "max_io_size": 131072, 00:21:25.672 "io_unit_size": 131072, 00:21:25.672 "max_aq_depth": 128, 00:21:25.672 "num_shared_buffers": 511, 00:21:25.672 "buf_cache_size": 4294967295, 00:21:25.672 "dif_insert_or_strip": false, 00:21:25.672 "zcopy": false, 00:21:25.672 "c2h_success": false, 00:21:25.673 "sock_priority": 0, 00:21:25.673 "abort_timeout_sec": 1, 00:21:25.673 "ack_timeout": 0, 00:21:25.673 "data_wr_pool_size": 0 00:21:25.673 } 00:21:25.673 }, 00:21:25.673 { 00:21:25.673 "method": "nvmf_create_subsystem", 00:21:25.673 "params": { 00:21:25.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.673 "allow_any_host": false, 00:21:25.673 "serial_number": "00000000000000000000", 00:21:25.673 "model_number": "SPDK bdev Controller", 00:21:25.673 "max_namespaces": 32, 00:21:25.673 "min_cntlid": 1, 00:21:25.673 "max_cntlid": 65519, 00:21:25.673 "ana_reporting": false 00:21:25.673 } 00:21:25.673 }, 00:21:25.673 { 00:21:25.673 "method": "nvmf_subsystem_add_host", 00:21:25.673 "params": { 00:21:25.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:21:25.673 "host": "nqn.2016-06.io.spdk:host1", 00:21:25.673 "psk": "key0" 00:21:25.673 } 00:21:25.673 }, 00:21:25.673 { 00:21:25.673 "method": "nvmf_subsystem_add_ns", 00:21:25.673 "params": { 00:21:25.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.673 "namespace": { 00:21:25.673 "nsid": 1, 00:21:25.673 "bdev_name": "malloc0", 00:21:25.673 "nguid": "4A1952D40B9F4AB39A31B0BE6E48CE43", 00:21:25.673 "uuid": "4a1952d4-0b9f-4ab3-9a31-b0be6e48ce43", 00:21:25.673 "no_auto_visible": false 00:21:25.673 } 00:21:25.673 } 00:21:25.673 }, 00:21:25.673 { 00:21:25.673 "method": "nvmf_subsystem_add_listener", 00:21:25.673 "params": { 00:21:25.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.673 "listen_address": { 00:21:25.673 "trtype": "TCP", 00:21:25.673 "adrfam": "IPv4", 00:21:25.673 "traddr": "10.0.0.2", 00:21:25.673 "trsvcid": "4420" 00:21:25.673 }, 00:21:25.673 "secure_channel": true 00:21:25.673 } 00:21:25.673 } 00:21:25.673 ] 00:21:25.673 } 00:21:25.673 ] 00:21:25.673 }' 00:21:25.673 13:51:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1132910 00:21:25.673 13:51:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1132910 00:21:25.673 13:51:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:25.673 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1132910 ']' 00:21:25.673 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.673 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:25.673 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.673 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:25.673 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.673 [2024-07-15 13:51:52.156387] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:25.673 [2024-07-15 13:51:52.156445] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.673 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.932 [2024-07-15 13:51:52.220576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.932 [2024-07-15 13:51:52.284980] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.932 [2024-07-15 13:51:52.285015] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.932 [2024-07-15 13:51:52.285023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.932 [2024-07-15 13:51:52.285029] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.932 [2024-07-15 13:51:52.285034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
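(The relaunch traced above does not replay RPCs one by one; it feeds the configuration captured from the previous target back in through a file descriptor. A minimal sketch of that pattern, with the namespace and binary path as they appear in this log; bash process substitution is what produces the /dev/fd/N path seen on the command line.)

tgtcfg=$(scripts/rpc.py save_config)    # JSON dump of the running target, as echoed above
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")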
00:21:25.932 [2024-07-15 13:51:52.285086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.192 [2024-07-15 13:51:52.482183] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.192 [2024-07-15 13:51:52.514179] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:26.192 [2024-07-15 13:51:52.531309] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.452 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:26.452 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:26.452 13:51:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:26.452 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:26.452 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.452 13:51:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.452 13:51:52 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1133175 00:21:26.452 13:51:52 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1133175 /var/tmp/bdevperf.sock 00:21:26.452 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1133175 ']' 00:21:26.452 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.452 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.452 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
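(Each 'Waiting for process to start up and listen on UNIX domain socket ...' message comes from the harness polling the new application's RPC socket before issuing further RPCs. The real helper is waitforlisten from autotest_common.sh; the loop below is only a stand-alone sketch of the same idea, with an illustrative helper name and retry budget.)

wait_for_rpc_sock() {
    local sock=$1 retries=100
    while (( retries-- > 0 )); do
        # rpc_get_methods answers as soon as the application is serving RPCs on $sock
        if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
wait_for_rpc_sock /var/tmp/bdevperf.sock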
00:21:26.452 13:51:52 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:26.452 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.452 13:51:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.452 13:51:52 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:26.452 "subsystems": [ 00:21:26.452 { 00:21:26.452 "subsystem": "keyring", 00:21:26.452 "config": [ 00:21:26.452 { 00:21:26.452 "method": "keyring_file_add_key", 00:21:26.452 "params": { 00:21:26.452 "name": "key0", 00:21:26.452 "path": "/tmp/tmp.9e5d9DS3Pe" 00:21:26.452 } 00:21:26.452 } 00:21:26.452 ] 00:21:26.452 }, 00:21:26.452 { 00:21:26.452 "subsystem": "iobuf", 00:21:26.452 "config": [ 00:21:26.452 { 00:21:26.452 "method": "iobuf_set_options", 00:21:26.452 "params": { 00:21:26.452 "small_pool_count": 8192, 00:21:26.452 "large_pool_count": 1024, 00:21:26.452 "small_bufsize": 8192, 00:21:26.452 "large_bufsize": 135168 00:21:26.452 } 00:21:26.452 } 00:21:26.452 ] 00:21:26.452 }, 00:21:26.452 { 00:21:26.452 "subsystem": "sock", 00:21:26.452 "config": [ 00:21:26.452 { 00:21:26.452 "method": "sock_set_default_impl", 00:21:26.452 "params": { 00:21:26.452 "impl_name": "posix" 00:21:26.452 } 00:21:26.452 }, 00:21:26.452 { 00:21:26.453 "method": "sock_impl_set_options", 00:21:26.453 "params": { 00:21:26.453 "impl_name": "ssl", 00:21:26.453 "recv_buf_size": 4096, 00:21:26.453 "send_buf_size": 4096, 00:21:26.453 "enable_recv_pipe": true, 00:21:26.453 "enable_quickack": false, 00:21:26.453 "enable_placement_id": 0, 00:21:26.453 "enable_zerocopy_send_server": true, 00:21:26.453 "enable_zerocopy_send_client": false, 00:21:26.453 "zerocopy_threshold": 0, 00:21:26.453 "tls_version": 0, 00:21:26.453 "enable_ktls": false 00:21:26.453 } 00:21:26.453 }, 00:21:26.453 { 00:21:26.453 "method": "sock_impl_set_options", 00:21:26.453 "params": { 00:21:26.453 "impl_name": "posix", 00:21:26.453 "recv_buf_size": 2097152, 00:21:26.453 "send_buf_size": 2097152, 00:21:26.453 "enable_recv_pipe": true, 00:21:26.453 "enable_quickack": false, 00:21:26.453 "enable_placement_id": 0, 00:21:26.453 "enable_zerocopy_send_server": true, 00:21:26.453 "enable_zerocopy_send_client": false, 00:21:26.453 "zerocopy_threshold": 0, 00:21:26.453 "tls_version": 0, 00:21:26.453 "enable_ktls": false 00:21:26.453 } 00:21:26.453 } 00:21:26.453 ] 00:21:26.453 }, 00:21:26.453 { 00:21:26.453 "subsystem": "vmd", 00:21:26.453 "config": [] 00:21:26.453 }, 00:21:26.453 { 00:21:26.453 "subsystem": "accel", 00:21:26.453 "config": [ 00:21:26.453 { 00:21:26.453 "method": "accel_set_options", 00:21:26.453 "params": { 00:21:26.453 "small_cache_size": 128, 00:21:26.453 "large_cache_size": 16, 00:21:26.453 "task_count": 2048, 00:21:26.453 "sequence_count": 2048, 00:21:26.453 "buf_count": 2048 00:21:26.453 } 00:21:26.453 } 00:21:26.453 ] 00:21:26.453 }, 00:21:26.453 { 00:21:26.453 "subsystem": "bdev", 00:21:26.453 "config": [ 00:21:26.453 { 00:21:26.453 "method": "bdev_set_options", 00:21:26.453 "params": { 00:21:26.453 "bdev_io_pool_size": 65535, 00:21:26.453 "bdev_io_cache_size": 256, 00:21:26.453 "bdev_auto_examine": true, 00:21:26.453 "iobuf_small_cache_size": 128, 00:21:26.453 "iobuf_large_cache_size": 16 00:21:26.453 } 00:21:26.453 }, 00:21:26.453 { 00:21:26.453 "method": "bdev_raid_set_options", 00:21:26.453 "params": { 00:21:26.453 "process_window_size_kb": 1024 00:21:26.453 } 
00:21:26.453 }, 00:21:26.453 { 00:21:26.453 "method": "bdev_iscsi_set_options", 00:21:26.453 "params": { 00:21:26.453 "timeout_sec": 30 00:21:26.453 } 00:21:26.453 }, 00:21:26.453 { 00:21:26.453 "method": "bdev_nvme_set_options", 00:21:26.453 "params": { 00:21:26.453 "action_on_timeout": "none", 00:21:26.453 "timeout_us": 0, 00:21:26.453 "timeout_admin_us": 0, 00:21:26.453 "keep_alive_timeout_ms": 10000, 00:21:26.453 "arbitration_burst": 0, 00:21:26.453 "low_priority_weight": 0, 00:21:26.453 "medium_priority_weight": 0, 00:21:26.453 "high_priority_weight": 0, 00:21:26.453 "nvme_adminq_poll_period_us": 10000, 00:21:26.453 "nvme_ioq_poll_period_us": 0, 00:21:26.453 "io_queue_requests": 512, 00:21:26.453 "delay_cmd_submit": true, 00:21:26.453 "transport_retry_count": 4, 00:21:26.453 "bdev_retry_count": 3, 00:21:26.453 "transport_ack_timeout": 0, 00:21:26.453 "ctrlr_loss_timeout_sec": 0, 00:21:26.453 "reconnect_delay_sec": 0, 00:21:26.453 "fast_io_fail_timeout_sec": 0, 00:21:26.453 "disable_auto_failback": false, 00:21:26.453 "generate_uuids": false, 00:21:26.453 "transport_tos": 0, 00:21:26.453 "nvme_error_stat": false, 00:21:26.453 "rdma_srq_size": 0, 00:21:26.453 "io_path_stat": false, 00:21:26.453 "allow_accel_sequence": false, 00:21:26.453 "rdma_max_cq_size": 0, 00:21:26.453 "rdma_cm_event_timeout_ms": 0, 00:21:26.453 "dhchap_digests": [ 00:21:26.453 "sha256", 00:21:26.453 "sha384", 00:21:26.453 "sha512" 00:21:26.453 ], 00:21:26.453 "dhchap_dhgroups": [ 00:21:26.453 "null", 00:21:26.453 "ffdhe2048", 00:21:26.453 "ffdhe3072", 00:21:26.453 "ffdhe4096", 00:21:26.453 "ffdhe6144", 00:21:26.453 "ffdhe8192" 00:21:26.453 ] 00:21:26.453 } 00:21:26.453 }, 00:21:26.453 { 00:21:26.453 "method": "bdev_nvme_attach_controller", 00:21:26.453 "params": { 00:21:26.453 "name": "nvme0", 00:21:26.453 "trtype": "TCP", 00:21:26.453 "adrfam": "IPv4", 00:21:26.453 "traddr": "10.0.0.2", 00:21:26.453 "trsvcid": "4420", 00:21:26.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:26.453 "prchk_reftag": false, 00:21:26.453 "prchk_guard": false, 00:21:26.453 "ctrlr_loss_timeout_sec": 0, 00:21:26.453 "reconnect_delay_sec": 0, 00:21:26.453 "fast_io_fail_timeout_sec": 0, 00:21:26.453 "psk": "key0", 00:21:26.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:26.453 "hdgst": false, 00:21:26.453 "ddgst": false 00:21:26.453 } 00:21:26.453 }, 00:21:26.453 { 00:21:26.453 "method": "bdev_nvme_set_hotplug", 00:21:26.453 "params": { 00:21:26.453 "period_us": 100000, 00:21:26.453 "enable": false 00:21:26.453 } 00:21:26.453 }, 00:21:26.453 { 00:21:26.453 "method": "bdev_enable_histogram", 00:21:26.453 "params": { 00:21:26.453 "name": "nvme0n1", 00:21:26.453 "enable": true 00:21:26.453 } 00:21:26.453 }, 00:21:26.453 { 00:21:26.453 "method": "bdev_wait_for_examine" 00:21:26.453 } 00:21:26.453 ] 00:21:26.453 }, 00:21:26.453 { 00:21:26.453 "subsystem": "nbd", 00:21:26.453 "config": [] 00:21:26.453 } 00:21:26.453 ] 00:21:26.453 }' 00:21:26.714 [2024-07-15 13:51:53.003175] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
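(Unlike the earlier bdevperf runs, this instance is started with -c /dev/fd/63, so the controller attach, including the psk reference, is already part of its start-up configuration instead of being issued over RPC afterwards. A purely illustrative way to confirm that wiring in a saved config such as the $bperfcfg echoed above:)

echo "$bperfcfg" | jq '.subsystems[] | select(.subsystem == "bdev") | .config[]
    | select(.method == "bdev_nvme_attach_controller") | .params.psk'
# prints: "key0"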
00:21:26.714 [2024-07-15 13:51:53.003229] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133175 ] 00:21:26.714 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.714 [2024-07-15 13:51:53.078720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.714 [2024-07-15 13:51:53.132567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.979 [2024-07-15 13:51:53.265990] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.239 13:51:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.239 13:51:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:27.499 13:51:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:27.499 13:51:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:27.499 13:51:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.499 13:51:53 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:27.499 Running I/O for 1 seconds... 00:21:28.883 00:21:28.884 Latency(us) 00:21:28.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.884 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:28.884 Verification LBA range: start 0x0 length 0x2000 00:21:28.884 nvme0n1 : 1.07 2116.17 8.27 0.00 0.00 58817.54 4724.05 66409.81 00:21:28.884 =================================================================================================================== 00:21:28.884 Total : 2116.17 8.27 0.00 0.00 58817.54 4724.05 66409.81 00:21:28.884 0 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:28.884 nvmf_trace.0 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1133175 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1133175 ']' 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
kill -0 1133175 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1133175 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1133175' 00:21:28.884 killing process with pid 1133175 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1133175 00:21:28.884 Received shutdown signal, test time was about 1.000000 seconds 00:21:28.884 00:21:28.884 Latency(us) 00:21:28.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.884 =================================================================================================================== 00:21:28.884 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1133175 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:28.884 rmmod nvme_tcp 00:21:28.884 rmmod nvme_fabrics 00:21:28.884 rmmod nvme_keyring 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1132910 ']' 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1132910 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1132910 ']' 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1132910 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.884 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1132910 00:21:29.144 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:29.144 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:29.144 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1132910' 00:21:29.144 killing process with pid 1132910 00:21:29.144 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1132910 00:21:29.144 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1132910 00:21:29.144 13:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:29.144 13:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:29.144 13:51:55 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:29.145 13:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:29.145 13:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:29.145 13:51:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.145 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.145 13:51:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.689 13:51:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:31.689 13:51:57 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.lRKcInaeAw /tmp/tmp.4VatYu3R69 /tmp/tmp.9e5d9DS3Pe 00:21:31.689 00:21:31.689 real 1m23.056s 00:21:31.689 user 2m5.166s 00:21:31.689 sys 0m29.163s 00:21:31.689 13:51:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:31.689 13:51:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.689 ************************************ 00:21:31.689 END TEST nvmf_tls 00:21:31.689 ************************************ 00:21:31.689 13:51:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:31.689 13:51:57 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:31.689 13:51:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:31.689 13:51:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:31.689 13:51:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:31.689 ************************************ 00:21:31.689 START TEST nvmf_fips 00:21:31.689 ************************************ 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:31.689 * Looking for test storage... 
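(The START TEST / END TEST banners and the real/user/sys summary above come from the harness's run_test wrapper timing each per-feature script. The function below is only a sketch of that idea, not the actual autotest_common.sh implementation.)

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # the bash time keyword prints the real/user/sys lines seen above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
run_test nvmf_fips test/nvmf/fips/fips.sh --transport=tcp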
00:21:31.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.689 13:51:57 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:31.689 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:31.690 13:51:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:31.690 Error setting digest 00:21:31.690 00C21829A67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:31.690 00C21829A67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:31.690 13:51:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.276 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:38.277 
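[annotation] The fips.sh trace a little earlier (fips.sh@85 through fips.sh@127) gates this test on three conditions: the installed OpenSSL reports a version of at least 3.0.0, "openssl list -providers" shows both a base and a fips provider once OPENSSL_CONF points at the generated spdk_fips.conf, and a plain "openssl md5" is rejected (the "Error setting digest ... unsupported" lines), which is the expected behaviour when only FIPS-approved algorithms are available. A minimal stand-alone re-check of the same gate, assuming the spdk_fips.conf written by build_openssl_config is in the current directory; the file name comes from the trace, everything else below is illustrative and not part of fips.sh:
    # hedged sketch: reproduce the FIPS gate by hand
    openssl version | awk '{print $2}'        # fips.sh@85 requires this to be >= 3.0.0
    export OPENSSL_CONF=spdk_fips.conf        # config generated by build_openssl_config above
    openssl list -providers | grep name       # expect one "base" and one "fips" provider line
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 still accepted - FIPS provider is not enforcing" >&2
    else
        echo "MD5 rejected, as the test expects under FIPS"
    fi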
13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:38.277 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:38.277 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:38.277 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:38.277 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.277 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:38.538 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:38.538 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:38.538 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:38.538 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:38.538 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:38.538 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:38.538 13:52:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:38.538 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:38.538 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:38.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:38.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:21:38.799 00:21:38.799 --- 10.0.0.2 ping statistics --- 00:21:38.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.799 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:38.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:38.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:21:38.799 00:21:38.799 --- 10.0.0.1 ping statistics --- 00:21:38.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.799 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1137774 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1137774 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1137774 ']' 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.799 13:52:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:38.799 [2024-07-15 13:52:05.217113] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:38.799 [2024-07-15 13:52:05.217194] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.799 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.799 [2024-07-15 13:52:05.306452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.060 [2024-07-15 13:52:05.396514] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.060 [2024-07-15 13:52:05.396561] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:39.060 [2024-07-15 13:52:05.396569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.060 [2024-07-15 13:52:05.396576] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.060 [2024-07-15 13:52:05.396582] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:39.060 [2024-07-15 13:52:05.396612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.706 13:52:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.706 13:52:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:39.706 13:52:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:39.706 13:52:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:39.706 13:52:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:39.706 13:52:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.706 13:52:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:39.706 13:52:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:39.706 13:52:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.706 13:52:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:39.706 13:52:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.706 13:52:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.706 13:52:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.706 13:52:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:39.706 [2024-07-15 13:52:06.177678] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.706 [2024-07-15 13:52:06.193667] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.706 [2024-07-15 13:52:06.193929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.706 [2024-07-15 13:52:06.223793] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:39.706 malloc0 00:21:39.996 13:52:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:39.996 13:52:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1137989 00:21:39.996 13:52:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1137989 /var/tmp/bdevperf.sock 00:21:39.996 13:52:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1137989 ']' 00:21:39.996 13:52:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.996 13:52:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:39.996 13:52:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:39.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.996 13:52:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:39.996 13:52:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:39.996 13:52:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:39.996 [2024-07-15 13:52:06.324479] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:39.996 [2024-07-15 13:52:06.324554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137989 ] 00:21:39.996 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.996 [2024-07-15 13:52:06.379451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.996 [2024-07-15 13:52:06.443767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.565 13:52:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:40.565 13:52:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:40.565 13:52:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:40.826 [2024-07-15 13:52:07.195326] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:40.826 [2024-07-15 13:52:07.195388] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:40.826 TLSTESTn1 00:21:40.826 13:52:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:41.086 Running I/O for 10 seconds... 
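[annotation] The two commands that drive the measurement are worth pulling out of the trace just above: the interleaved PSK written to test/nvmf/fips/key.txt (chmod 0600) is handed to bdevperf over its RPC socket, and the I/O itself is kicked off with bdevperf.py. Paths below are shortened relative to the spdk checkout for readability; the options are taken verbatim from the fips.sh@150 and fips.sh@154 trace lines:
    # initiator side, against the bdevperf RPC socket (target listener 10.0.0.2:4420, backed by malloc0)
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
The "spdk_nvme_ctrlr_opts.psk ... deprecated" warning just above is triggered by that --psk option. In the latency table that follows, 2605.78 IOPS at the 4096-byte I/O size works out to 2605.78 x 4096 B, roughly 10.18 MiB/s, which matches the MiB/s column.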
00:21:51.085 00:21:51.085 Latency(us) 00:21:51.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.085 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:51.085 Verification LBA range: start 0x0 length 0x2000 00:21:51.085 TLSTESTn1 : 10.06 2605.78 10.18 0.00 0.00 48965.48 4751.36 124955.31 00:21:51.085 =================================================================================================================== 00:21:51.085 Total : 2605.78 10.18 0.00 0.00 48965.48 4751.36 124955.31 00:21:51.085 0 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:51.085 nvmf_trace.0 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1137989 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1137989 ']' 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1137989 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:51.085 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1137989 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1137989' 00:21:51.346 killing process with pid 1137989 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1137989 00:21:51.346 Received shutdown signal, test time was about 10.000000 seconds 00:21:51.346 00:21:51.346 Latency(us) 00:21:51.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.346 =================================================================================================================== 00:21:51.346 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:51.346 [2024-07-15 13:52:17.635861] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1137989 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips 
-- nvmf/common.sh@488 -- # nvmfcleanup 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:51.346 rmmod nvme_tcp 00:21:51.346 rmmod nvme_fabrics 00:21:51.346 rmmod nvme_keyring 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1137774 ']' 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1137774 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1137774 ']' 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1137774 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1137774 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1137774' 00:21:51.346 killing process with pid 1137774 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1137774 00:21:51.346 [2024-07-15 13:52:17.863858] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:51.346 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1137774 00:21:51.607 13:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:51.607 13:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:51.607 13:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:51.607 13:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:51.607 13:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:51.607 13:52:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.607 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.607 13:52:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.150 13:52:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:54.150 13:52:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:54.150 00:21:54.150 real 0m22.313s 00:21:54.150 user 0m22.964s 00:21:54.150 sys 0m9.997s 00:21:54.150 13:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:54.150 13:52:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:54.150 ************************************ 00:21:54.150 END TEST nvmf_fips 
00:21:54.150 ************************************ 00:21:54.150 13:52:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:54.150 13:52:20 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:54.150 13:52:20 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:54.150 13:52:20 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:54.150 13:52:20 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:54.150 13:52:20 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:54.150 13:52:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:00.794 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:00.794 13:52:26 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:00.794 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:00.794 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:00.794 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:00.794 13:52:26 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:00.794 13:52:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:00.794 13:52:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
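[annotation] The nvmf.sh@71 through nvmf.sh@76 lines above show why the perf_adq sub-test runs at all on this rig: physical NICs, TCP transport, and at least one supported net device discovered. Condensed, the gate is roughly the sketch below; run_test, net_devs and the perf_adq.sh path are taken from the trace, while the variable names on the left-hand sides of the comparisons and $rootdir are my guesses at what the expanded literals ("phy", "tcp") come from, not something this excerpt shows:
    # only on physical NICs, only for TCP, and only if supported E810/X722/mlx ports were found
    if [[ $NET_TYPE == phy ]] && [[ $SPDK_TEST_NVMF_TRANSPORT == tcp ]] && (( ${#net_devs[@]} > 0 )); then
        run_test nvmf_perf_adq "$rootdir/test/nvmf/target/perf_adq.sh" --transport=tcp
    fi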
00:22:00.794 13:52:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:00.794 ************************************ 00:22:00.794 START TEST nvmf_perf_adq 00:22:00.794 ************************************ 00:22:00.794 13:52:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:00.794 * Looking for test storage... 00:22:00.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.794 13:52:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.795 13:52:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.795 13:52:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.795 13:52:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:00.795 13:52:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.795 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:00.795 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:00.795 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:00.795 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.795 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.795 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.795 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:00.795 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:00.795 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:00.795 13:52:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:00.795 13:52:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:00.795 13:52:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:08.941 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:08.941 Found 0000:4b:00.1 (0x8086 - 0x159b) 
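[annotation] gather_supported_nvmf_pci_devs, traced again here for the perf_adq run, matches PCI vendor:device pairs (E810 at 8086:1592 and 8086:159b, X722 at 8086:37d2, plus the listed Mellanox IDs) and then reads the kernel netdev name out of sysfs. A hedged way to repeat the same lookup by hand on this box; the BDFs 0000:4b:00.0/0000:4b:00.1 and the cvl_0_0/cvl_0_1 names are taken from the "Found" lines, and lspci output formatting can differ between distros:
    lspci -nn -d 8086:159b                       # the two E810 ports the trace reports
    ls /sys/bus/pci/devices/0000:4b:00.0/net/    # -> cvl_0_0, the netdev for port 0
    ls /sys/bus/pci/devices/0000:4b:00.1/net/    # -> cvl_0_1, the netdev for port 1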
00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:08.941 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:08.941 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:08.941 13:52:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:09.202 13:52:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:11.113 13:52:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:16.441 13:52:42 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:16.441 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:16.442 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:16.442 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:16.442 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:16.442 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:16.442 13:52:42 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:16.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:22:16.442 00:22:16.442 --- 10.0.0.2 ping statistics --- 00:22:16.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.442 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:16.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:16.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:22:16.442 00:22:16.442 --- 10.0.0.1 ping statistics --- 00:22:16.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.442 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1149867 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1149867 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1149867 ']' 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.442 13:52:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.703 [2024-07-15 13:52:42.984163] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
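Condensed from the nvmf_tcp_init trace above, the TCP test bed is two ports of the same E810 NIC with the target-side port moved into its own network namespace; the interface names and 10.0.0.x addresses are the ones this rig uses, and the target binary path is shortened here:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP through
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
    # start the target inside the namespace; --wait-for-rpc holds it until it is configured
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &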
00:22:16.703 [2024-07-15 13:52:42.984232] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.703 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.703 [2024-07-15 13:52:43.056327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:16.703 [2024-07-15 13:52:43.135989] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.703 [2024-07-15 13:52:43.136026] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.703 [2024-07-15 13:52:43.136034] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.703 [2024-07-15 13:52:43.136040] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.703 [2024-07-15 13:52:43.136046] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.703 [2024-07-15 13:52:43.136185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.703 [2024-07-15 13:52:43.136301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.703 [2024-07-15 13:52:43.136455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.703 [2024-07-15 13:52:43.136456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.274 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.274 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:17.274 13:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.274 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:17.274 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.535 [2024-07-15 13:52:43.949320] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.535 Malloc1 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.535 13:52:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.535 13:52:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.535 13:52:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.535 13:52:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.535 [2024-07-15 13:52:44.008671] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.535 13:52:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.535 13:52:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1150188 00:22:17.535 13:52:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:17.535 13:52:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:17.535 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.077 13:52:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:20.077 13:52:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.077 13:52:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:20.077 13:52:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.077 13:52:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:20.077 
"tick_rate": 2400000000, 00:22:20.077 "poll_groups": [ 00:22:20.077 { 00:22:20.077 "name": "nvmf_tgt_poll_group_000", 00:22:20.077 "admin_qpairs": 1, 00:22:20.077 "io_qpairs": 1, 00:22:20.077 "current_admin_qpairs": 1, 00:22:20.078 "current_io_qpairs": 1, 00:22:20.078 "pending_bdev_io": 0, 00:22:20.078 "completed_nvme_io": 18721, 00:22:20.078 "transports": [ 00:22:20.078 { 00:22:20.078 "trtype": "TCP" 00:22:20.078 } 00:22:20.078 ] 00:22:20.078 }, 00:22:20.078 { 00:22:20.078 "name": "nvmf_tgt_poll_group_001", 00:22:20.078 "admin_qpairs": 0, 00:22:20.078 "io_qpairs": 1, 00:22:20.078 "current_admin_qpairs": 0, 00:22:20.078 "current_io_qpairs": 1, 00:22:20.078 "pending_bdev_io": 0, 00:22:20.078 "completed_nvme_io": 26528, 00:22:20.078 "transports": [ 00:22:20.078 { 00:22:20.078 "trtype": "TCP" 00:22:20.078 } 00:22:20.078 ] 00:22:20.078 }, 00:22:20.078 { 00:22:20.078 "name": "nvmf_tgt_poll_group_002", 00:22:20.078 "admin_qpairs": 0, 00:22:20.078 "io_qpairs": 1, 00:22:20.078 "current_admin_qpairs": 0, 00:22:20.078 "current_io_qpairs": 1, 00:22:20.078 "pending_bdev_io": 0, 00:22:20.078 "completed_nvme_io": 19905, 00:22:20.078 "transports": [ 00:22:20.078 { 00:22:20.078 "trtype": "TCP" 00:22:20.078 } 00:22:20.078 ] 00:22:20.078 }, 00:22:20.078 { 00:22:20.078 "name": "nvmf_tgt_poll_group_003", 00:22:20.078 "admin_qpairs": 0, 00:22:20.078 "io_qpairs": 1, 00:22:20.078 "current_admin_qpairs": 0, 00:22:20.078 "current_io_qpairs": 1, 00:22:20.078 "pending_bdev_io": 0, 00:22:20.078 "completed_nvme_io": 19016, 00:22:20.078 "transports": [ 00:22:20.078 { 00:22:20.078 "trtype": "TCP" 00:22:20.078 } 00:22:20.078 ] 00:22:20.078 } 00:22:20.078 ] 00:22:20.078 }' 00:22:20.078 13:52:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:20.078 13:52:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:20.078 13:52:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:20.078 13:52:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:20.078 13:52:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1150188 00:22:28.212 Initializing NVMe Controllers 00:22:28.212 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:28.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:28.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:28.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:28.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:28.212 Initialization complete. Launching workers. 
00:22:28.212 ======================================================== 00:22:28.212 Latency(us) 00:22:28.212 Device Information : IOPS MiB/s Average min max 00:22:28.212 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11256.20 43.97 5687.00 1435.03 10246.96 00:22:28.212 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14589.10 56.99 4386.72 1260.56 9463.40 00:22:28.212 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13895.50 54.28 4605.24 1520.82 8730.31 00:22:28.212 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14032.20 54.81 4561.17 1456.48 11188.49 00:22:28.212 ======================================================== 00:22:28.212 Total : 53772.99 210.05 4760.89 1260.56 11188.49 00:22:28.212 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:28.212 rmmod nvme_tcp 00:22:28.212 rmmod nvme_fabrics 00:22:28.212 rmmod nvme_keyring 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1149867 ']' 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1149867 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1149867 ']' 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1149867 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1149867 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1149867' 00:22:28.212 killing process with pid 1149867 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1149867 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1149867 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.212 13:52:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.124 13:52:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:30.384 13:52:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:30.384 13:52:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:31.766 13:52:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:33.677 13:53:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.965 13:53:05 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:38.965 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:38.965 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
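The adq_configure_nvmf_target calls traced in the first pass above (perf_adq.sh@42-49) reduce to the RPC sequence below; the second pass later in this log repeats it with --enable-placement-id 1 and --sock-priority 1. This is a sketch only, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper and paths shortened:

    # ADQ-relevant socket options on the default (posix) implementation
    scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    # a 64 MB / 512-byte-block malloc bdev exported through one subsystem and one TCP listener
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # drive I/O from the initiator-side cores (0xF0) against the exported namespace
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'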
00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:38.965 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:38.965 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.965 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.966 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.966 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.966 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.966 
13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.966 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.966 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.966 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.966 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.966 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:39.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:22:39.227 00:22:39.227 --- 10.0.0.2 ping statistics --- 00:22:39.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.227 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:39.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:22:39.227 00:22:39.227 --- 10.0.0.1 ping statistics --- 00:22:39.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.227 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:39.227 net.core.busy_poll = 1 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:39.227 net.core.busy_read = 1 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:39.227 13:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:39.488 13:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:39.488 13:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:39.488 13:53:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:39.488 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:39.488 13:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:39.488 13:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.488 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1154693 00:22:39.488 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1154693 00:22:39.488 13:53:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:39.488 13:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1154693 ']' 00:22:39.488 13:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.488 13:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.488 13:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.488 13:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.488 13:53:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.488 [2024-07-15 13:53:05.882776] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:39.488 [2024-07-15 13:53:05.882827] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.488 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.488 [2024-07-15 13:53:05.952253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.748 [2024-07-15 13:53:06.017881] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.748 [2024-07-15 13:53:06.017917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.748 [2024-07-15 13:53:06.017925] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.748 [2024-07-15 13:53:06.017931] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.748 [2024-07-15 13:53:06.017937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
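The adq_configure_driver step traced a few entries above is the NIC/kernel side of ADQ: hardware TC offload plus busy polling, an mqprio root qdisc that splits the queues into two traffic classes, and a flower filter that steers NVMe/TCP traffic into the second class in hardware. In outline (run against the port in the target namespace, helper path shortened; the 2@0 2@2 map gives queues 0-1 to default traffic and 2-3 to the filtered class):

    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP to 10.0.0.2:4420 into hardware TC 1
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # pin transmit/receive queues to CPUs via the helper shipped in the SPDK repo
    ip netns exec cvl_0_0_ns_spdk scripts/perf/nvmf/set_xps_rxqs cvl_0_0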
00:22:39.748 [2024-07-15 13:53:06.018075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.748 [2024-07-15 13:53:06.018204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.748 [2024-07-15 13:53:06.018309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.748 [2024-07-15 13:53:06.018310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.319 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.320 13:53:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:40.320 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.320 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.320 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.320 13:53:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:40.320 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.320 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.320 [2024-07-15 13:53:06.820481] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.320 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.320 13:53:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:40.320 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.320 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.580 Malloc1 00:22:40.580 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.580 13:53:06 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:40.580 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.580 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.580 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.580 13:53:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:40.580 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.580 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.580 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.580 13:53:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.580 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.580 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.580 [2024-07-15 13:53:06.879890] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.580 13:53:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.580 13:53:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1155045 00:22:40.580 13:53:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:40.580 13:53:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:40.580 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.490 13:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:42.490 13:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.490 13:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:42.490 13:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.490 13:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:42.490 "tick_rate": 2400000000, 00:22:42.490 "poll_groups": [ 00:22:42.490 { 00:22:42.490 "name": "nvmf_tgt_poll_group_000", 00:22:42.490 "admin_qpairs": 1, 00:22:42.490 "io_qpairs": 1, 00:22:42.490 "current_admin_qpairs": 1, 00:22:42.490 "current_io_qpairs": 1, 00:22:42.490 "pending_bdev_io": 0, 00:22:42.490 "completed_nvme_io": 26546, 00:22:42.490 "transports": [ 00:22:42.490 { 00:22:42.490 "trtype": "TCP" 00:22:42.490 } 00:22:42.490 ] 00:22:42.490 }, 00:22:42.490 { 00:22:42.490 "name": "nvmf_tgt_poll_group_001", 00:22:42.490 "admin_qpairs": 0, 00:22:42.490 "io_qpairs": 3, 00:22:42.490 "current_admin_qpairs": 0, 00:22:42.490 "current_io_qpairs": 3, 00:22:42.490 "pending_bdev_io": 0, 00:22:42.490 "completed_nvme_io": 41902, 00:22:42.490 "transports": [ 00:22:42.490 { 00:22:42.490 "trtype": "TCP" 00:22:42.490 } 00:22:42.490 ] 00:22:42.490 }, 00:22:42.490 { 00:22:42.490 "name": "nvmf_tgt_poll_group_002", 00:22:42.490 "admin_qpairs": 0, 00:22:42.490 "io_qpairs": 0, 00:22:42.490 "current_admin_qpairs": 0, 00:22:42.490 "current_io_qpairs": 0, 00:22:42.490 "pending_bdev_io": 0, 00:22:42.490 "completed_nvme_io": 0, 
00:22:42.490 "transports": [ 00:22:42.490 { 00:22:42.490 "trtype": "TCP" 00:22:42.490 } 00:22:42.490 ] 00:22:42.490 }, 00:22:42.490 { 00:22:42.490 "name": "nvmf_tgt_poll_group_003", 00:22:42.490 "admin_qpairs": 0, 00:22:42.490 "io_qpairs": 0, 00:22:42.490 "current_admin_qpairs": 0, 00:22:42.490 "current_io_qpairs": 0, 00:22:42.490 "pending_bdev_io": 0, 00:22:42.490 "completed_nvme_io": 0, 00:22:42.490 "transports": [ 00:22:42.490 { 00:22:42.490 "trtype": "TCP" 00:22:42.490 } 00:22:42.490 ] 00:22:42.490 } 00:22:42.490 ] 00:22:42.490 }' 00:22:42.490 13:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:42.490 13:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:42.490 13:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:42.490 13:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:42.490 13:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1155045 00:22:50.629 Initializing NVMe Controllers 00:22:50.629 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:50.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:50.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:50.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:50.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:50.629 Initialization complete. Launching workers. 00:22:50.629 ======================================================== 00:22:50.629 Latency(us) 00:22:50.629 Device Information : IOPS MiB/s Average min max 00:22:50.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6642.60 25.95 9635.58 1386.45 54302.17 00:22:50.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 17991.20 70.28 3556.97 1154.14 9215.04 00:22:50.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6686.60 26.12 9582.41 1423.17 54345.39 00:22:50.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8511.90 33.25 7517.82 1534.95 53534.33 00:22:50.629 ======================================================== 00:22:50.629 Total : 39832.30 155.59 6428.56 1154.14 54345.39 00:22:50.629 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.629 rmmod nvme_tcp 00:22:50.629 rmmod nvme_fabrics 00:22:50.629 rmmod nvme_keyring 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1154693 ']' 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1154693 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1154693 ']' 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1154693 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.629 13:53:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1154693 00:22:50.889 13:53:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:50.889 13:53:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:50.889 13:53:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1154693' 00:22:50.889 killing process with pid 1154693 00:22:50.889 13:53:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1154693 00:22:50.889 13:53:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1154693 00:22:50.889 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:50.889 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:50.889 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:50.889 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.889 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.889 13:53:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.889 13:53:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.889 13:53:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.435 13:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.435 13:53:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:53.435 00:22:53.435 real 0m52.421s 00:22:53.435 user 2m45.332s 00:22:53.435 sys 0m12.777s 00:22:53.435 13:53:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:53.435 13:53:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.435 ************************************ 00:22:53.435 END TEST nvmf_perf_adq 00:22:53.435 ************************************ 00:22:53.435 13:53:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:53.435 13:53:19 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:53.435 13:53:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:53.435 13:53:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.435 13:53:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:53.435 ************************************ 00:22:53.435 START TEST nvmf_shutdown 00:22:53.435 ************************************ 00:22:53.435 13:53:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:53.435 * Looking for test storage... 
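Before the shutdown suite gets going, the nvmftestfini teardown that closed out the ADQ run (traced just above) amounts to the following; $nvmfpid stands for the target pid recorded at nvmfappstart:

    # unload the NVMe/TCP initiator modules pulled in for the run
    modprobe -v -r nvme-tcp          # the trace shows nvme_fabrics and nvme_keyring dropping with it
    modprobe -v -r nvme-fabrics
    # stop the target and wait for it to exit
    kill "$nvmfpid"
    wait "$nvmfpid"
    # the harness then runs its _remove_spdk_ns helper (not expanded in this excerpt)
    # and flushes the initiator-side address
    ip -4 addr flush cvl_0_1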
00:22:53.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:53.435 13:53:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.435 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:53.435 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:53.436 ************************************ 00:22:53.436 START TEST nvmf_shutdown_tc1 00:22:53.436 ************************************ 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:22:53.436 13:53:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.436 13:53:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.026 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.026 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.026 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.026 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.026 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.026 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.026 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.026 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.026 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.026 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:00.026 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.026 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:00.026 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.026 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:00.026 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:00.027 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:00.027 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.027 13:53:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:00.027 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:00.027 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.027 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:23:00.289 00:23:00.289 --- 10.0.0.2 ping statistics --- 00:23:00.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.289 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:00.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.382 ms 00:23:00.289 00:23:00.289 --- 10.0.0.1 ping statistics --- 00:23:00.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.289 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.289 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.550 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:00.550 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.550 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:00.550 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.550 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1161174 00:23:00.550 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1161174 00:23:00.550 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:00.550 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1161174 ']' 00:23:00.550 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.550 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.550 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.550 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.550 13:53:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.550 [2024-07-15 13:53:26.881938] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
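The nvmf_tcp_init sequence traced above leaves one e810 port in the default namespace as the initiator and moves the other into a private namespace for the target, then verifies connectivity in both directions before nvmf_tgt is launched inside that namespace (as already traced). Reduced to the bare commands seen in the trace, with interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses specific to this run:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
ip netns add cvl_0_0_ns_spdk                           # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
# The target application then runs inside the namespace, e.g.:
#   ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E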
00:23:00.550 [2024-07-15 13:53:26.882004] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.550 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.550 [2024-07-15 13:53:26.971799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.550 [2024-07-15 13:53:27.067285] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.550 [2024-07-15 13:53:27.067343] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.550 [2024-07-15 13:53:27.067351] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.550 [2024-07-15 13:53:27.067358] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.550 [2024-07-15 13:53:27.067364] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.550 [2024-07-15 13:53:27.067527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.550 [2024-07-15 13:53:27.067685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.550 [2024-07-15 13:53:27.067851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.550 [2024-07-15 13:53:27.067852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:01.491 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.491 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:01.491 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.491 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.491 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.491 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.491 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:01.491 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.491 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.491 [2024-07-15 13:53:27.718688] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.491 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.491 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:01.492 13:53:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.492 13:53:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.492 Malloc1 00:23:01.492 [2024-07-15 13:53:27.822181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.492 Malloc2 00:23:01.492 Malloc3 00:23:01.492 Malloc4 00:23:01.492 Malloc5 00:23:01.492 Malloc6 00:23:01.752 Malloc7 00:23:01.752 Malloc8 00:23:01.752 Malloc9 00:23:01.752 Malloc10 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1161550 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1161550 
/var/tmp/bdevperf.sock 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1161550 ']' 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.752 { 00:23:01.752 "params": { 00:23:01.752 "name": "Nvme$subsystem", 00:23:01.752 "trtype": "$TEST_TRANSPORT", 00:23:01.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.752 "adrfam": "ipv4", 00:23:01.752 "trsvcid": "$NVMF_PORT", 00:23:01.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.752 "hdgst": ${hdgst:-false}, 00:23:01.752 "ddgst": ${ddgst:-false} 00:23:01.752 }, 00:23:01.752 "method": "bdev_nvme_attach_controller" 00:23:01.752 } 00:23:01.752 EOF 00:23:01.752 )") 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.752 { 00:23:01.752 "params": { 00:23:01.752 "name": "Nvme$subsystem", 00:23:01.752 "trtype": "$TEST_TRANSPORT", 00:23:01.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.752 "adrfam": "ipv4", 00:23:01.752 "trsvcid": "$NVMF_PORT", 00:23:01.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.752 "hdgst": ${hdgst:-false}, 00:23:01.752 "ddgst": ${ddgst:-false} 00:23:01.752 }, 00:23:01.752 "method": "bdev_nvme_attach_controller" 00:23:01.752 } 00:23:01.752 EOF 00:23:01.752 )") 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.752 { 00:23:01.752 "params": { 00:23:01.752 
"name": "Nvme$subsystem", 00:23:01.752 "trtype": "$TEST_TRANSPORT", 00:23:01.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.752 "adrfam": "ipv4", 00:23:01.752 "trsvcid": "$NVMF_PORT", 00:23:01.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.752 "hdgst": ${hdgst:-false}, 00:23:01.752 "ddgst": ${ddgst:-false} 00:23:01.752 }, 00:23:01.752 "method": "bdev_nvme_attach_controller" 00:23:01.752 } 00:23:01.752 EOF 00:23:01.752 )") 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.752 { 00:23:01.752 "params": { 00:23:01.752 "name": "Nvme$subsystem", 00:23:01.752 "trtype": "$TEST_TRANSPORT", 00:23:01.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.752 "adrfam": "ipv4", 00:23:01.752 "trsvcid": "$NVMF_PORT", 00:23:01.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.752 "hdgst": ${hdgst:-false}, 00:23:01.752 "ddgst": ${ddgst:-false} 00:23:01.752 }, 00:23:01.752 "method": "bdev_nvme_attach_controller" 00:23:01.752 } 00:23:01.752 EOF 00:23:01.752 )") 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.752 { 00:23:01.752 "params": { 00:23:01.752 "name": "Nvme$subsystem", 00:23:01.752 "trtype": "$TEST_TRANSPORT", 00:23:01.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.752 "adrfam": "ipv4", 00:23:01.752 "trsvcid": "$NVMF_PORT", 00:23:01.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.752 "hdgst": ${hdgst:-false}, 00:23:01.752 "ddgst": ${ddgst:-false} 00:23:01.752 }, 00:23:01.752 "method": "bdev_nvme_attach_controller" 00:23:01.752 } 00:23:01.752 EOF 00:23:01.752 )") 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.752 { 00:23:01.752 "params": { 00:23:01.752 "name": "Nvme$subsystem", 00:23:01.752 "trtype": "$TEST_TRANSPORT", 00:23:01.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.752 "adrfam": "ipv4", 00:23:01.752 "trsvcid": "$NVMF_PORT", 00:23:01.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.752 "hdgst": ${hdgst:-false}, 00:23:01.752 "ddgst": ${ddgst:-false} 00:23:01.752 }, 00:23:01.752 "method": "bdev_nvme_attach_controller" 00:23:01.752 } 00:23:01.752 EOF 00:23:01.752 )") 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.752 { 00:23:01.752 "params": { 00:23:01.752 "name": "Nvme$subsystem", 
00:23:01.752 "trtype": "$TEST_TRANSPORT", 00:23:01.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.752 "adrfam": "ipv4", 00:23:01.752 "trsvcid": "$NVMF_PORT", 00:23:01.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.752 "hdgst": ${hdgst:-false}, 00:23:01.752 "ddgst": ${ddgst:-false} 00:23:01.752 }, 00:23:01.752 "method": "bdev_nvme_attach_controller" 00:23:01.752 } 00:23:01.752 EOF 00:23:01.752 )") 00:23:01.752 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.752 [2024-07-15 13:53:28.277280] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:02.013 [2024-07-15 13:53:28.277386] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:02.013 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.013 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.013 { 00:23:02.013 "params": { 00:23:02.013 "name": "Nvme$subsystem", 00:23:02.013 "trtype": "$TEST_TRANSPORT", 00:23:02.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.013 "adrfam": "ipv4", 00:23:02.013 "trsvcid": "$NVMF_PORT", 00:23:02.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.013 "hdgst": ${hdgst:-false}, 00:23:02.013 "ddgst": ${ddgst:-false} 00:23:02.013 }, 00:23:02.013 "method": "bdev_nvme_attach_controller" 00:23:02.013 } 00:23:02.013 EOF 00:23:02.013 )") 00:23:02.013 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:02.013 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.013 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.013 { 00:23:02.013 "params": { 00:23:02.013 "name": "Nvme$subsystem", 00:23:02.013 "trtype": "$TEST_TRANSPORT", 00:23:02.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.013 "adrfam": "ipv4", 00:23:02.013 "trsvcid": "$NVMF_PORT", 00:23:02.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.013 "hdgst": ${hdgst:-false}, 00:23:02.013 "ddgst": ${ddgst:-false} 00:23:02.013 }, 00:23:02.013 "method": "bdev_nvme_attach_controller" 00:23:02.013 } 00:23:02.013 EOF 00:23:02.013 )") 00:23:02.013 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:02.013 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.013 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.013 { 00:23:02.013 "params": { 00:23:02.013 "name": "Nvme$subsystem", 00:23:02.013 "trtype": "$TEST_TRANSPORT", 00:23:02.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.013 "adrfam": "ipv4", 00:23:02.013 "trsvcid": "$NVMF_PORT", 00:23:02.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.013 "hdgst": ${hdgst:-false}, 00:23:02.013 "ddgst": ${ddgst:-false} 00:23:02.013 }, 00:23:02.013 "method": "bdev_nvme_attach_controller" 00:23:02.013 } 00:23:02.013 EOF 00:23:02.013 )") 00:23:02.013 13:53:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:02.013 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:02.013 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:02.013 13:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:02.013 "params": { 00:23:02.013 "name": "Nvme1", 00:23:02.013 "trtype": "tcp", 00:23:02.013 "traddr": "10.0.0.2", 00:23:02.013 "adrfam": "ipv4", 00:23:02.013 "trsvcid": "4420", 00:23:02.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:02.013 "hdgst": false, 00:23:02.013 "ddgst": false 00:23:02.013 }, 00:23:02.013 "method": "bdev_nvme_attach_controller" 00:23:02.013 },{ 00:23:02.013 "params": { 00:23:02.013 "name": "Nvme2", 00:23:02.013 "trtype": "tcp", 00:23:02.013 "traddr": "10.0.0.2", 00:23:02.013 "adrfam": "ipv4", 00:23:02.013 "trsvcid": "4420", 00:23:02.013 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:02.013 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:02.013 "hdgst": false, 00:23:02.013 "ddgst": false 00:23:02.013 }, 00:23:02.014 "method": "bdev_nvme_attach_controller" 00:23:02.014 },{ 00:23:02.014 "params": { 00:23:02.014 "name": "Nvme3", 00:23:02.014 "trtype": "tcp", 00:23:02.014 "traddr": "10.0.0.2", 00:23:02.014 "adrfam": "ipv4", 00:23:02.014 "trsvcid": "4420", 00:23:02.014 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:02.014 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:02.014 "hdgst": false, 00:23:02.014 "ddgst": false 00:23:02.014 }, 00:23:02.014 "method": "bdev_nvme_attach_controller" 00:23:02.014 },{ 00:23:02.014 "params": { 00:23:02.014 "name": "Nvme4", 00:23:02.014 "trtype": "tcp", 00:23:02.014 "traddr": "10.0.0.2", 00:23:02.014 "adrfam": "ipv4", 00:23:02.014 "trsvcid": "4420", 00:23:02.014 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:02.014 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:02.014 "hdgst": false, 00:23:02.014 "ddgst": false 00:23:02.014 }, 00:23:02.014 "method": "bdev_nvme_attach_controller" 00:23:02.014 },{ 00:23:02.014 "params": { 00:23:02.014 "name": "Nvme5", 00:23:02.014 "trtype": "tcp", 00:23:02.014 "traddr": "10.0.0.2", 00:23:02.014 "adrfam": "ipv4", 00:23:02.014 "trsvcid": "4420", 00:23:02.014 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:02.014 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:02.014 "hdgst": false, 00:23:02.014 "ddgst": false 00:23:02.014 }, 00:23:02.014 "method": "bdev_nvme_attach_controller" 00:23:02.014 },{ 00:23:02.014 "params": { 00:23:02.014 "name": "Nvme6", 00:23:02.014 "trtype": "tcp", 00:23:02.014 "traddr": "10.0.0.2", 00:23:02.014 "adrfam": "ipv4", 00:23:02.014 "trsvcid": "4420", 00:23:02.014 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:02.014 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:02.014 "hdgst": false, 00:23:02.014 "ddgst": false 00:23:02.014 }, 00:23:02.014 "method": "bdev_nvme_attach_controller" 00:23:02.014 },{ 00:23:02.014 "params": { 00:23:02.014 "name": "Nvme7", 00:23:02.014 "trtype": "tcp", 00:23:02.014 "traddr": "10.0.0.2", 00:23:02.014 "adrfam": "ipv4", 00:23:02.014 "trsvcid": "4420", 00:23:02.014 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:02.014 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:02.014 "hdgst": false, 00:23:02.014 "ddgst": false 00:23:02.014 }, 00:23:02.014 "method": "bdev_nvme_attach_controller" 00:23:02.014 },{ 00:23:02.014 "params": { 00:23:02.014 "name": "Nvme8", 00:23:02.014 "trtype": "tcp", 00:23:02.014 "traddr": "10.0.0.2", 00:23:02.014 "adrfam": "ipv4", 
00:23:02.014 "trsvcid": "4420", 00:23:02.014 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:02.014 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:02.014 "hdgst": false, 00:23:02.014 "ddgst": false 00:23:02.014 }, 00:23:02.014 "method": "bdev_nvme_attach_controller" 00:23:02.014 },{ 00:23:02.014 "params": { 00:23:02.014 "name": "Nvme9", 00:23:02.014 "trtype": "tcp", 00:23:02.014 "traddr": "10.0.0.2", 00:23:02.014 "adrfam": "ipv4", 00:23:02.014 "trsvcid": "4420", 00:23:02.014 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:02.014 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:02.014 "hdgst": false, 00:23:02.014 "ddgst": false 00:23:02.014 }, 00:23:02.014 "method": "bdev_nvme_attach_controller" 00:23:02.014 },{ 00:23:02.014 "params": { 00:23:02.014 "name": "Nvme10", 00:23:02.014 "trtype": "tcp", 00:23:02.014 "traddr": "10.0.0.2", 00:23:02.014 "adrfam": "ipv4", 00:23:02.014 "trsvcid": "4420", 00:23:02.014 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:02.014 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:02.014 "hdgst": false, 00:23:02.014 "ddgst": false 00:23:02.014 }, 00:23:02.014 "method": "bdev_nvme_attach_controller" 00:23:02.014 }' 00:23:02.014 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.014 [2024-07-15 13:53:28.342248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.014 [2024-07-15 13:53:28.406845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.395 13:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.395 13:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:03.395 13:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:03.395 13:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.395 13:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:03.395 13:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.395 13:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1161550 00:23:03.395 13:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:03.395 13:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:04.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1161550 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1161174 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.777 { 00:23:04.777 "params": { 00:23:04.777 "name": "Nvme$subsystem", 00:23:04.777 "trtype": "$TEST_TRANSPORT", 00:23:04.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.777 "adrfam": "ipv4", 00:23:04.777 "trsvcid": "$NVMF_PORT", 00:23:04.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.777 "hdgst": ${hdgst:-false}, 00:23:04.777 "ddgst": ${ddgst:-false} 00:23:04.777 }, 00:23:04.777 "method": "bdev_nvme_attach_controller" 00:23:04.777 } 00:23:04.777 EOF 00:23:04.777 )") 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.777 { 00:23:04.777 "params": { 00:23:04.777 "name": "Nvme$subsystem", 00:23:04.777 "trtype": "$TEST_TRANSPORT", 00:23:04.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.777 "adrfam": "ipv4", 00:23:04.777 "trsvcid": "$NVMF_PORT", 00:23:04.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.777 "hdgst": ${hdgst:-false}, 00:23:04.777 "ddgst": ${ddgst:-false} 00:23:04.777 }, 00:23:04.777 "method": "bdev_nvme_attach_controller" 00:23:04.777 } 00:23:04.777 EOF 00:23:04.777 )") 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.777 { 00:23:04.777 "params": { 00:23:04.777 "name": "Nvme$subsystem", 00:23:04.777 "trtype": "$TEST_TRANSPORT", 00:23:04.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.777 "adrfam": "ipv4", 00:23:04.777 "trsvcid": "$NVMF_PORT", 00:23:04.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.777 "hdgst": ${hdgst:-false}, 00:23:04.777 "ddgst": ${ddgst:-false} 00:23:04.777 }, 00:23:04.777 "method": "bdev_nvme_attach_controller" 00:23:04.777 } 00:23:04.777 EOF 00:23:04.777 )") 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.777 { 00:23:04.777 "params": { 00:23:04.777 "name": "Nvme$subsystem", 00:23:04.777 "trtype": "$TEST_TRANSPORT", 00:23:04.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.777 "adrfam": "ipv4", 00:23:04.777 "trsvcid": "$NVMF_PORT", 00:23:04.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.777 "hdgst": ${hdgst:-false}, 00:23:04.777 "ddgst": ${ddgst:-false} 00:23:04.777 }, 00:23:04.777 "method": "bdev_nvme_attach_controller" 00:23:04.777 } 00:23:04.777 EOF 00:23:04.777 )") 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:23:04.777 { 00:23:04.777 "params": { 00:23:04.777 "name": "Nvme$subsystem", 00:23:04.777 "trtype": "$TEST_TRANSPORT", 00:23:04.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.777 "adrfam": "ipv4", 00:23:04.777 "trsvcid": "$NVMF_PORT", 00:23:04.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.777 "hdgst": ${hdgst:-false}, 00:23:04.777 "ddgst": ${ddgst:-false} 00:23:04.777 }, 00:23:04.777 "method": "bdev_nvme_attach_controller" 00:23:04.777 } 00:23:04.777 EOF 00:23:04.777 )") 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.777 { 00:23:04.777 "params": { 00:23:04.777 "name": "Nvme$subsystem", 00:23:04.777 "trtype": "$TEST_TRANSPORT", 00:23:04.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.777 "adrfam": "ipv4", 00:23:04.777 "trsvcid": "$NVMF_PORT", 00:23:04.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.777 "hdgst": ${hdgst:-false}, 00:23:04.777 "ddgst": ${ddgst:-false} 00:23:04.777 }, 00:23:04.777 "method": "bdev_nvme_attach_controller" 00:23:04.777 } 00:23:04.777 EOF 00:23:04.777 )") 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.777 [2024-07-15 13:53:30.957278] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:04.777 [2024-07-15 13:53:30.957328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162182 ] 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.777 { 00:23:04.777 "params": { 00:23:04.777 "name": "Nvme$subsystem", 00:23:04.777 "trtype": "$TEST_TRANSPORT", 00:23:04.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.777 "adrfam": "ipv4", 00:23:04.777 "trsvcid": "$NVMF_PORT", 00:23:04.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.777 "hdgst": ${hdgst:-false}, 00:23:04.777 "ddgst": ${ddgst:-false} 00:23:04.777 }, 00:23:04.777 "method": "bdev_nvme_attach_controller" 00:23:04.777 } 00:23:04.777 EOF 00:23:04.777 )") 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.777 { 00:23:04.777 "params": { 00:23:04.777 "name": "Nvme$subsystem", 00:23:04.777 "trtype": "$TEST_TRANSPORT", 00:23:04.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.777 "adrfam": "ipv4", 00:23:04.777 "trsvcid": "$NVMF_PORT", 00:23:04.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.777 "hdgst": ${hdgst:-false}, 00:23:04.777 "ddgst": ${ddgst:-false} 00:23:04.777 }, 00:23:04.777 
"method": "bdev_nvme_attach_controller" 00:23:04.777 } 00:23:04.777 EOF 00:23:04.777 )") 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.777 { 00:23:04.777 "params": { 00:23:04.777 "name": "Nvme$subsystem", 00:23:04.777 "trtype": "$TEST_TRANSPORT", 00:23:04.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.777 "adrfam": "ipv4", 00:23:04.777 "trsvcid": "$NVMF_PORT", 00:23:04.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.777 "hdgst": ${hdgst:-false}, 00:23:04.777 "ddgst": ${ddgst:-false} 00:23:04.777 }, 00:23:04.777 "method": "bdev_nvme_attach_controller" 00:23:04.777 } 00:23:04.777 EOF 00:23:04.777 )") 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.777 { 00:23:04.777 "params": { 00:23:04.777 "name": "Nvme$subsystem", 00:23:04.777 "trtype": "$TEST_TRANSPORT", 00:23:04.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.777 "adrfam": "ipv4", 00:23:04.777 "trsvcid": "$NVMF_PORT", 00:23:04.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.777 "hdgst": ${hdgst:-false}, 00:23:04.777 "ddgst": ${ddgst:-false} 00:23:04.777 }, 00:23:04.777 "method": "bdev_nvme_attach_controller" 00:23:04.777 } 00:23:04.777 EOF 00:23:04.777 )") 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.777 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:04.777 13:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:04.777 "params": { 00:23:04.777 "name": "Nvme1", 00:23:04.777 "trtype": "tcp", 00:23:04.777 "traddr": "10.0.0.2", 00:23:04.777 "adrfam": "ipv4", 00:23:04.777 "trsvcid": "4420", 00:23:04.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.777 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:04.777 "hdgst": false, 00:23:04.777 "ddgst": false 00:23:04.778 }, 00:23:04.778 "method": "bdev_nvme_attach_controller" 00:23:04.778 },{ 00:23:04.778 "params": { 00:23:04.778 "name": "Nvme2", 00:23:04.778 "trtype": "tcp", 00:23:04.778 "traddr": "10.0.0.2", 00:23:04.778 "adrfam": "ipv4", 00:23:04.778 "trsvcid": "4420", 00:23:04.778 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:04.778 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:04.778 "hdgst": false, 00:23:04.778 "ddgst": false 00:23:04.778 }, 00:23:04.778 "method": "bdev_nvme_attach_controller" 00:23:04.778 },{ 00:23:04.778 "params": { 00:23:04.778 "name": "Nvme3", 00:23:04.778 "trtype": "tcp", 00:23:04.778 "traddr": "10.0.0.2", 00:23:04.778 "adrfam": "ipv4", 00:23:04.778 "trsvcid": "4420", 00:23:04.778 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:04.778 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:04.778 "hdgst": false, 00:23:04.778 "ddgst": false 00:23:04.778 }, 00:23:04.778 "method": "bdev_nvme_attach_controller" 00:23:04.778 },{ 00:23:04.778 "params": { 00:23:04.778 "name": "Nvme4", 00:23:04.778 "trtype": "tcp", 00:23:04.778 "traddr": "10.0.0.2", 00:23:04.778 "adrfam": "ipv4", 00:23:04.778 "trsvcid": "4420", 00:23:04.778 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:04.778 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:04.778 "hdgst": false, 00:23:04.778 "ddgst": false 00:23:04.778 }, 00:23:04.778 "method": "bdev_nvme_attach_controller" 00:23:04.778 },{ 00:23:04.778 "params": { 00:23:04.778 "name": "Nvme5", 00:23:04.778 "trtype": "tcp", 00:23:04.778 "traddr": "10.0.0.2", 00:23:04.778 "adrfam": "ipv4", 00:23:04.778 "trsvcid": "4420", 00:23:04.778 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:04.778 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:04.778 "hdgst": false, 00:23:04.778 "ddgst": false 00:23:04.778 }, 00:23:04.778 "method": "bdev_nvme_attach_controller" 00:23:04.778 },{ 00:23:04.778 "params": { 00:23:04.778 "name": "Nvme6", 00:23:04.778 "trtype": "tcp", 00:23:04.778 "traddr": "10.0.0.2", 00:23:04.778 "adrfam": "ipv4", 00:23:04.778 "trsvcid": "4420", 00:23:04.778 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:04.778 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:04.778 "hdgst": false, 00:23:04.778 "ddgst": false 00:23:04.778 }, 00:23:04.778 "method": "bdev_nvme_attach_controller" 00:23:04.778 },{ 00:23:04.778 "params": { 00:23:04.778 "name": "Nvme7", 00:23:04.778 "trtype": "tcp", 00:23:04.778 "traddr": "10.0.0.2", 00:23:04.778 "adrfam": "ipv4", 00:23:04.778 "trsvcid": "4420", 00:23:04.778 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:04.778 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:04.778 "hdgst": false, 00:23:04.778 "ddgst": false 00:23:04.778 }, 00:23:04.778 "method": "bdev_nvme_attach_controller" 00:23:04.778 },{ 00:23:04.778 "params": { 00:23:04.778 "name": "Nvme8", 00:23:04.778 "trtype": "tcp", 00:23:04.778 "traddr": "10.0.0.2", 00:23:04.778 "adrfam": "ipv4", 00:23:04.778 "trsvcid": "4420", 00:23:04.778 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:04.778 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:04.778 "hdgst": false, 
00:23:04.778 "ddgst": false 00:23:04.778 }, 00:23:04.778 "method": "bdev_nvme_attach_controller" 00:23:04.778 },{ 00:23:04.778 "params": { 00:23:04.778 "name": "Nvme9", 00:23:04.778 "trtype": "tcp", 00:23:04.778 "traddr": "10.0.0.2", 00:23:04.778 "adrfam": "ipv4", 00:23:04.778 "trsvcid": "4420", 00:23:04.778 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:04.778 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:04.778 "hdgst": false, 00:23:04.778 "ddgst": false 00:23:04.778 }, 00:23:04.778 "method": "bdev_nvme_attach_controller" 00:23:04.778 },{ 00:23:04.778 "params": { 00:23:04.778 "name": "Nvme10", 00:23:04.778 "trtype": "tcp", 00:23:04.778 "traddr": "10.0.0.2", 00:23:04.778 "adrfam": "ipv4", 00:23:04.778 "trsvcid": "4420", 00:23:04.778 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:04.778 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:04.778 "hdgst": false, 00:23:04.778 "ddgst": false 00:23:04.778 }, 00:23:04.778 "method": "bdev_nvme_attach_controller" 00:23:04.778 }' 00:23:04.778 [2024-07-15 13:53:31.017088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.778 [2024-07-15 13:53:31.081863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.162 Running I/O for 1 seconds... 00:23:07.546 00:23:07.546 Latency(us) 00:23:07.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.546 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.546 Verification LBA range: start 0x0 length 0x400 00:23:07.546 Nvme1n1 : 1.06 181.16 11.32 0.00 0.00 343165.72 23483.73 305834.67 00:23:07.546 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.546 Verification LBA range: start 0x0 length 0x400 00:23:07.546 Nvme2n1 : 1.18 216.39 13.52 0.00 0.00 288099.41 23483.73 272629.76 00:23:07.546 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.546 Verification LBA range: start 0x0 length 0x400 00:23:07.546 Nvme3n1 : 1.07 299.27 18.70 0.00 0.00 202941.01 11796.48 222822.40 00:23:07.546 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.546 Verification LBA range: start 0x0 length 0x400 00:23:07.546 Nvme4n1 : 1.07 179.13 11.20 0.00 0.00 334037.05 22937.60 300591.79 00:23:07.546 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.546 Verification LBA range: start 0x0 length 0x400 00:23:07.546 Nvme5n1 : 1.22 262.53 16.41 0.00 0.00 226386.26 22719.15 213210.45 00:23:07.546 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.546 Verification LBA range: start 0x0 length 0x400 00:23:07.546 Nvme6n1 : 1.19 269.30 16.83 0.00 0.00 216225.11 21736.11 216705.71 00:23:07.546 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.546 Verification LBA range: start 0x0 length 0x400 00:23:07.546 Nvme7n1 : 1.20 266.92 16.68 0.00 0.00 214697.64 22063.79 265639.25 00:23:07.546 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.546 Verification LBA range: start 0x0 length 0x400 00:23:07.546 Nvme8n1 : 1.22 263.11 16.44 0.00 0.00 214458.88 19114.67 242920.11 00:23:07.546 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.546 Verification LBA range: start 0x0 length 0x400 00:23:07.546 Nvme9n1 : 1.21 263.73 16.48 0.00 0.00 210006.02 18568.53 248162.99 00:23:07.546 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.546 Verification LBA range: start 0x0 length 0x400 
00:23:07.546 Nvme10n1 : 1.23 260.03 16.25 0.00 0.00 209840.13 17367.04 242920.11 00:23:07.546 =================================================================================================================== 00:23:07.546 Total : 2461.55 153.85 0.00 0.00 236817.36 11796.48 305834.67 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.546 rmmod nvme_tcp 00:23:07.546 rmmod nvme_fabrics 00:23:07.546 rmmod nvme_keyring 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1161174 ']' 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1161174 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1161174 ']' 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1161174 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1161174 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1161174' 00:23:07.546 killing process with pid 1161174 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1161174 00:23:07.546 13:53:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1161174 00:23:07.808 13:53:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:07.808 13:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:07.808 13:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:07.808 13:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.808 13:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:07.808 13:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.808 13:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.808 13:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:10.360 00:23:10.360 real 0m16.629s 00:23:10.360 user 0m34.806s 00:23:10.360 sys 0m6.417s 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.360 ************************************ 00:23:10.360 END TEST nvmf_shutdown_tc1 00:23:10.360 ************************************ 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:10.360 ************************************ 00:23:10.360 START TEST nvmf_shutdown_tc2 00:23:10.360 ************************************ 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:10.360 13:53:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:10.360 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:10.361 13:53:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:10.361 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:10.361 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:23:10.361 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:10.361 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:10.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:23:10.361 00:23:10.361 --- 10.0.0.2 ping statistics --- 00:23:10.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.361 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:10.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.429 ms 00:23:10.361 00:23:10.361 --- 10.0.0.1 ping statistics --- 00:23:10.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.361 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.361 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1163352 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1163352 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1E 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1163352 ']' 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:10.362 13:53:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.362 [2024-07-15 13:53:36.782961] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:10.362 [2024-07-15 13:53:36.783031] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.362 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.362 [2024-07-15 13:53:36.870582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:10.623 [2024-07-15 13:53:36.932599] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.623 [2024-07-15 13:53:36.932633] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.623 [2024-07-15 13:53:36.932639] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.623 [2024-07-15 13:53:36.932643] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.623 [2024-07-15 13:53:36.932647] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
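The nvmf_tcp_init sequence a little earlier in the trace (ip netns add through the two pings) gives the target its own network namespace: cvl_0_0 is moved inside and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and nvmf_tgt is then launched under ip netns exec so that it listens from inside the namespace. A condensed sketch of that plumbing, using the interface names and addresses seen in the log; the real helper in nvmf/common.sh carries extra bookkeeping and cleanup that is omitted here.

# Condensed sketch of the namespace bring-up shown in the trace (run as root).
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0        # moved into the namespace, becomes 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace, becomes 10.0.0.1

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP (port 4420) in and prove reachability in both directions
# before the target is started inside the namespace.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# The target itself then runs under the namespace, as in the trace:
#   ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E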
00:23:10.623 [2024-07-15 13:53:36.932760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.623 [2024-07-15 13:53:36.932921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:10.623 [2024-07-15 13:53:36.933059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.623 [2024-07-15 13:53:36.933061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.194 [2024-07-15 13:53:37.614528] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.194 13:53:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.194 13:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.194 Malloc1 00:23:11.194 [2024-07-15 13:53:37.713264] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.455 Malloc2 00:23:11.455 Malloc3 00:23:11.455 Malloc4 00:23:11.455 Malloc5 00:23:11.455 Malloc6 00:23:11.455 Malloc7 00:23:11.455 Malloc8 00:23:11.716 Malloc9 00:23:11.717 Malloc10 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1163734 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1163734 /var/tmp/bdevperf.sock 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1163734 ']' 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
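The create_subsystems phase above (the target/shutdown.sh@27/@28 loop, followed by Malloc1 through Malloc10 and the listener notice on 10.0.0.2 port 4420) amounts to one TCP transport plus, per index, a malloc bdev, a subsystem, a namespace and a listener. A rough reconstruction with scripts/rpc.py is sketched below; the malloc size and the -a/-s subsystem flags are assumptions, while the bdev names, NQNs, address and port are the ones in the log.

# Rough RPC sequence behind the Malloc1..Malloc10 lines above (the bdev size
# and serial numbers are illustrative assumptions, not taken from the log).
rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192

for i in $(seq 1 10); do
    $rpc bdev_malloc_create -b "Malloc$i" 64 512
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

The bdevperf job whose start-up xtrace surrounds this point (perfpid 1163734, -q 64 -o 65536 -w verify -t 10, config from gen_nvmf_target_json) then attaches to each of these subsystems as Nvme1n1 through Nvme10n1, and the waitforio loop further down polls bdev_get_iostat until reads are flowing before the shutdown is triggered.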
00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.717 { 00:23:11.717 "params": { 00:23:11.717 "name": "Nvme$subsystem", 00:23:11.717 "trtype": "$TEST_TRANSPORT", 00:23:11.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.717 "adrfam": "ipv4", 00:23:11.717 "trsvcid": "$NVMF_PORT", 00:23:11.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.717 "hdgst": ${hdgst:-false}, 00:23:11.717 "ddgst": ${ddgst:-false} 00:23:11.717 }, 00:23:11.717 "method": "bdev_nvme_attach_controller" 00:23:11.717 } 00:23:11.717 EOF 00:23:11.717 )") 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.717 { 00:23:11.717 "params": { 00:23:11.717 "name": "Nvme$subsystem", 00:23:11.717 "trtype": "$TEST_TRANSPORT", 00:23:11.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.717 "adrfam": "ipv4", 00:23:11.717 "trsvcid": "$NVMF_PORT", 00:23:11.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.717 "hdgst": ${hdgst:-false}, 00:23:11.717 "ddgst": ${ddgst:-false} 00:23:11.717 }, 00:23:11.717 "method": "bdev_nvme_attach_controller" 00:23:11.717 } 00:23:11.717 EOF 00:23:11.717 )") 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.717 { 00:23:11.717 "params": { 00:23:11.717 "name": "Nvme$subsystem", 00:23:11.717 "trtype": "$TEST_TRANSPORT", 00:23:11.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.717 "adrfam": "ipv4", 00:23:11.717 "trsvcid": "$NVMF_PORT", 00:23:11.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.717 "hdgst": ${hdgst:-false}, 00:23:11.717 "ddgst": ${ddgst:-false} 00:23:11.717 }, 00:23:11.717 "method": "bdev_nvme_attach_controller" 00:23:11.717 } 00:23:11.717 EOF 00:23:11.717 )") 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.717 { 00:23:11.717 "params": { 00:23:11.717 "name": "Nvme$subsystem", 00:23:11.717 "trtype": "$TEST_TRANSPORT", 00:23:11.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.717 "adrfam": "ipv4", 00:23:11.717 "trsvcid": "$NVMF_PORT", 00:23:11.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.717 "hdgst": ${hdgst:-false}, 00:23:11.717 "ddgst": ${ddgst:-false} 00:23:11.717 }, 00:23:11.717 "method": "bdev_nvme_attach_controller" 00:23:11.717 } 00:23:11.717 EOF 00:23:11.717 )") 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.717 { 00:23:11.717 "params": { 00:23:11.717 "name": "Nvme$subsystem", 00:23:11.717 "trtype": "$TEST_TRANSPORT", 00:23:11.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.717 "adrfam": "ipv4", 00:23:11.717 "trsvcid": "$NVMF_PORT", 00:23:11.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.717 "hdgst": ${hdgst:-false}, 00:23:11.717 "ddgst": ${ddgst:-false} 00:23:11.717 }, 00:23:11.717 "method": "bdev_nvme_attach_controller" 00:23:11.717 } 00:23:11.717 EOF 00:23:11.717 )") 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.717 { 00:23:11.717 "params": { 00:23:11.717 "name": "Nvme$subsystem", 00:23:11.717 "trtype": "$TEST_TRANSPORT", 00:23:11.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.717 "adrfam": "ipv4", 00:23:11.717 "trsvcid": "$NVMF_PORT", 00:23:11.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.717 "hdgst": ${hdgst:-false}, 00:23:11.717 "ddgst": ${ddgst:-false} 00:23:11.717 }, 00:23:11.717 "method": "bdev_nvme_attach_controller" 00:23:11.717 } 00:23:11.717 EOF 00:23:11.717 )") 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.717 [2024-07-15 13:53:38.160259] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:23:11.717 [2024-07-15 13:53:38.160313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163734 ] 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.717 { 00:23:11.717 "params": { 00:23:11.717 "name": "Nvme$subsystem", 00:23:11.717 "trtype": "$TEST_TRANSPORT", 00:23:11.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.717 "adrfam": "ipv4", 00:23:11.717 "trsvcid": "$NVMF_PORT", 00:23:11.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.717 "hdgst": ${hdgst:-false}, 00:23:11.717 "ddgst": ${ddgst:-false} 00:23:11.717 }, 00:23:11.717 "method": "bdev_nvme_attach_controller" 00:23:11.717 } 00:23:11.717 EOF 00:23:11.717 )") 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.717 { 00:23:11.717 "params": { 00:23:11.717 "name": "Nvme$subsystem", 00:23:11.717 "trtype": "$TEST_TRANSPORT", 00:23:11.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.717 "adrfam": "ipv4", 00:23:11.717 "trsvcid": "$NVMF_PORT", 00:23:11.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.717 "hdgst": ${hdgst:-false}, 00:23:11.717 "ddgst": ${ddgst:-false} 00:23:11.717 }, 00:23:11.717 "method": "bdev_nvme_attach_controller" 00:23:11.717 } 00:23:11.717 EOF 00:23:11.717 )") 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.717 { 00:23:11.717 "params": { 00:23:11.717 "name": "Nvme$subsystem", 00:23:11.717 "trtype": "$TEST_TRANSPORT", 00:23:11.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.717 "adrfam": "ipv4", 00:23:11.717 "trsvcid": "$NVMF_PORT", 00:23:11.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.717 "hdgst": ${hdgst:-false}, 00:23:11.717 "ddgst": ${ddgst:-false} 00:23:11.717 }, 00:23:11.717 "method": "bdev_nvme_attach_controller" 00:23:11.717 } 00:23:11.717 EOF 00:23:11.717 )") 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.717 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.717 { 00:23:11.717 "params": { 00:23:11.717 "name": "Nvme$subsystem", 00:23:11.717 "trtype": "$TEST_TRANSPORT", 00:23:11.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.717 "adrfam": "ipv4", 00:23:11.717 "trsvcid": "$NVMF_PORT", 00:23:11.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.718 
"hdgst": ${hdgst:-false}, 00:23:11.718 "ddgst": ${ddgst:-false} 00:23:11.718 }, 00:23:11.718 "method": "bdev_nvme_attach_controller" 00:23:11.718 } 00:23:11.718 EOF 00:23:11.718 )") 00:23:11.718 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.718 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.718 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:11.718 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:11.718 13:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:11.718 "params": { 00:23:11.718 "name": "Nvme1", 00:23:11.718 "trtype": "tcp", 00:23:11.718 "traddr": "10.0.0.2", 00:23:11.718 "adrfam": "ipv4", 00:23:11.718 "trsvcid": "4420", 00:23:11.718 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.718 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.718 "hdgst": false, 00:23:11.718 "ddgst": false 00:23:11.718 }, 00:23:11.718 "method": "bdev_nvme_attach_controller" 00:23:11.718 },{ 00:23:11.718 "params": { 00:23:11.718 "name": "Nvme2", 00:23:11.718 "trtype": "tcp", 00:23:11.718 "traddr": "10.0.0.2", 00:23:11.718 "adrfam": "ipv4", 00:23:11.718 "trsvcid": "4420", 00:23:11.718 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:11.718 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:11.718 "hdgst": false, 00:23:11.718 "ddgst": false 00:23:11.718 }, 00:23:11.718 "method": "bdev_nvme_attach_controller" 00:23:11.718 },{ 00:23:11.718 "params": { 00:23:11.718 "name": "Nvme3", 00:23:11.718 "trtype": "tcp", 00:23:11.718 "traddr": "10.0.0.2", 00:23:11.718 "adrfam": "ipv4", 00:23:11.718 "trsvcid": "4420", 00:23:11.718 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:11.718 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:11.718 "hdgst": false, 00:23:11.718 "ddgst": false 00:23:11.718 }, 00:23:11.718 "method": "bdev_nvme_attach_controller" 00:23:11.718 },{ 00:23:11.718 "params": { 00:23:11.718 "name": "Nvme4", 00:23:11.718 "trtype": "tcp", 00:23:11.718 "traddr": "10.0.0.2", 00:23:11.718 "adrfam": "ipv4", 00:23:11.718 "trsvcid": "4420", 00:23:11.718 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:11.718 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:11.718 "hdgst": false, 00:23:11.718 "ddgst": false 00:23:11.718 }, 00:23:11.718 "method": "bdev_nvme_attach_controller" 00:23:11.718 },{ 00:23:11.718 "params": { 00:23:11.718 "name": "Nvme5", 00:23:11.718 "trtype": "tcp", 00:23:11.718 "traddr": "10.0.0.2", 00:23:11.718 "adrfam": "ipv4", 00:23:11.718 "trsvcid": "4420", 00:23:11.718 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:11.718 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:11.718 "hdgst": false, 00:23:11.718 "ddgst": false 00:23:11.718 }, 00:23:11.718 "method": "bdev_nvme_attach_controller" 00:23:11.718 },{ 00:23:11.718 "params": { 00:23:11.718 "name": "Nvme6", 00:23:11.718 "trtype": "tcp", 00:23:11.718 "traddr": "10.0.0.2", 00:23:11.718 "adrfam": "ipv4", 00:23:11.718 "trsvcid": "4420", 00:23:11.718 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:11.718 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:11.718 "hdgst": false, 00:23:11.718 "ddgst": false 00:23:11.718 }, 00:23:11.718 "method": "bdev_nvme_attach_controller" 00:23:11.718 },{ 00:23:11.718 "params": { 00:23:11.718 "name": "Nvme7", 00:23:11.718 "trtype": "tcp", 00:23:11.718 "traddr": "10.0.0.2", 00:23:11.718 "adrfam": "ipv4", 00:23:11.718 "trsvcid": "4420", 00:23:11.718 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:11.718 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:11.718 "hdgst": false, 
00:23:11.718 "ddgst": false 00:23:11.718 }, 00:23:11.718 "method": "bdev_nvme_attach_controller" 00:23:11.718 },{ 00:23:11.718 "params": { 00:23:11.718 "name": "Nvme8", 00:23:11.718 "trtype": "tcp", 00:23:11.718 "traddr": "10.0.0.2", 00:23:11.718 "adrfam": "ipv4", 00:23:11.718 "trsvcid": "4420", 00:23:11.718 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:11.718 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:11.718 "hdgst": false, 00:23:11.718 "ddgst": false 00:23:11.718 }, 00:23:11.718 "method": "bdev_nvme_attach_controller" 00:23:11.718 },{ 00:23:11.718 "params": { 00:23:11.718 "name": "Nvme9", 00:23:11.718 "trtype": "tcp", 00:23:11.718 "traddr": "10.0.0.2", 00:23:11.718 "adrfam": "ipv4", 00:23:11.718 "trsvcid": "4420", 00:23:11.718 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:11.718 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:11.718 "hdgst": false, 00:23:11.718 "ddgst": false 00:23:11.718 }, 00:23:11.718 "method": "bdev_nvme_attach_controller" 00:23:11.718 },{ 00:23:11.718 "params": { 00:23:11.718 "name": "Nvme10", 00:23:11.718 "trtype": "tcp", 00:23:11.718 "traddr": "10.0.0.2", 00:23:11.718 "adrfam": "ipv4", 00:23:11.718 "trsvcid": "4420", 00:23:11.718 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:11.718 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:11.718 "hdgst": false, 00:23:11.718 "ddgst": false 00:23:11.718 }, 00:23:11.718 "method": "bdev_nvme_attach_controller" 00:23:11.718 }' 00:23:11.718 [2024-07-15 13:53:38.219855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.978 [2024-07-15 13:53:38.284203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.365 Running I/O for 10 seconds... 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:13.365 13:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:13.672 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:13.672 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:13.672 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:13.672 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:13.672 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.672 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.672 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.672 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:13.672 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:13.672 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:13.931 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:13.931 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:13.931 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:13.931 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:13.931 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.931 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.931 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.192 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:14.192 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:14.192 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:14.192 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:14.192 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:14.192 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1163734 00:23:14.192 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1163734 ']' 00:23:14.192 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1163734 00:23:14.192 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:14.192 13:53:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.192 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1163734 00:23:14.192 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:14.192 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:14.192 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1163734' 00:23:14.192 killing process with pid 1163734 00:23:14.192 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1163734 00:23:14.192 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1163734 00:23:14.192 Received shutdown signal, test time was about 0.958768 seconds 00:23:14.192 00:23:14.192 Latency(us) 00:23:14.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.192 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.192 Verification LBA range: start 0x0 length 0x400 00:23:14.192 Nvme1n1 : 0.92 208.85 13.05 0.00 0.00 302508.37 22500.69 249910.61 00:23:14.192 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.192 Verification LBA range: start 0x0 length 0x400 00:23:14.192 Nvme2n1 : 0.94 205.21 12.83 0.00 0.00 301710.51 21189.97 255153.49 00:23:14.192 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.192 Verification LBA range: start 0x0 length 0x400 00:23:14.192 Nvme3n1 : 0.94 272.72 17.04 0.00 0.00 222124.37 20643.84 248162.99 00:23:14.192 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.192 Verification LBA range: start 0x0 length 0x400 00:23:14.192 Nvme4n1 : 0.93 207.24 12.95 0.00 0.00 285678.93 23046.83 248162.99 00:23:14.192 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.192 Verification LBA range: start 0x0 length 0x400 00:23:14.192 Nvme5n1 : 0.95 202.31 12.64 0.00 0.00 286787.41 22937.60 300591.79 00:23:14.192 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.192 Verification LBA range: start 0x0 length 0x400 00:23:14.192 Nvme6n1 : 0.94 271.70 16.98 0.00 0.00 208482.35 19660.80 255153.49 00:23:14.192 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.192 Verification LBA range: start 0x0 length 0x400 00:23:14.192 Nvme7n1 : 0.95 268.50 16.78 0.00 0.00 206407.25 20862.29 223696.21 00:23:14.192 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.192 Verification LBA range: start 0x0 length 0x400 00:23:14.192 Nvme8n1 : 0.96 265.17 16.57 0.00 0.00 203979.51 20316.16 244667.73 00:23:14.192 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.192 Verification LBA range: start 0x0 length 0x400 00:23:14.192 Nvme9n1 : 0.93 206.32 12.89 0.00 0.00 254871.32 20316.16 242920.11 00:23:14.192 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.192 Verification LBA range: start 0x0 length 0x400 00:23:14.192 Nvme10n1 : 0.95 270.81 16.93 0.00 0.00 189919.36 20206.93 248162.99 00:23:14.192 =================================================================================================================== 00:23:14.192 Total : 2378.82 148.68 0.00 0.00 240556.12 
19660.80 300591.79 00:23:14.452 13:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1163352 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:15.393 rmmod nvme_tcp 00:23:15.393 rmmod nvme_fabrics 00:23:15.393 rmmod nvme_keyring 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1163352 ']' 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1163352 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1163352 ']' 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1163352 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1163352 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1163352' 00:23:15.393 killing process with pid 1163352 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1163352 00:23:15.393 13:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1163352 00:23:15.653 13:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
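The I/O polling that tc2 just completed above (read_io_count climbing 3 -> 67 -> 131 until the 100-read threshold is met) is the waitforio helper from target/shutdown.sh. Below is a condensed sketch reconstructed from the xtrace, assuming the bounds visible in the trace (10 attempts, 0.25 s apart, at least 100 reads); rpc_cmd is the harness's RPC wrapper and the in-tree script remains the authoritative version:

    waitforio() {                      # reconstructed sketch, not the verbatim script
        local rpc_sock=$1 bdev=$2
        local ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            # ask bdevperf (via its RPC socket) how many reads have completed on this bdev
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then   # enough reads observed: I/O is flowing
                ret=0
                break
            fi
            sleep 0.25                              # back off briefly before polling again
        done
        return $ret
    }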
00:23:15.653 13:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:15.653 13:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:15.653 13:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:15.653 13:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:15.653 13:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.653 13:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:15.653 13:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:18.195 00:23:18.195 real 0m7.816s 00:23:18.195 user 0m23.266s 00:23:18.195 sys 0m1.280s 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.195 ************************************ 00:23:18.195 END TEST nvmf_shutdown_tc2 00:23:18.195 ************************************ 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:18.195 ************************************ 00:23:18.195 START TEST nvmf_shutdown_tc3 00:23:18.195 ************************************ 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 
00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.195 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:18.196 13:53:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:18.196 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:18.196 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:18.196 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:18.196 13:53:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:18.196 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:18.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:23:18.196 00:23:18.196 --- 10.0.0.2 ping statistics --- 00:23:18.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.196 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:18.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:23:18.196 00:23:18.196 --- 10.0.0.1 ping statistics --- 00:23:18.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.196 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1165007 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1165007 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1165007 ']' 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.196 13:53:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.196 [2024-07-15 13:53:44.696926] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:18.196 [2024-07-15 13:53:44.696999] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.456 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.456 [2024-07-15 13:53:44.784743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:18.456 [2024-07-15 13:53:44.846530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.456 [2024-07-15 13:53:44.846563] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.456 [2024-07-15 13:53:44.846568] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.456 [2024-07-15 13:53:44.846573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.456 [2024-07-15 13:53:44.846577] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
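The network plumbing traced above is easier to follow with the xtrace noise stripped. This is a condensed recap of what nvmf_tcp_init ran for this job (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the values picked for this run, not fixed constants); note that nvmf_tgt itself is launched under ip netns exec cvl_0_0_ns_spdk, so the target listens from inside the namespace:

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 in the host firewall
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns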
00:23:18.456 [2024-07-15 13:53:44.846691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.456 [2024-07-15 13:53:44.846855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.456 [2024-07-15 13:53:44.847016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.456 [2024-07-15 13:53:44.847018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:19.025 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:19.025 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.026 [2024-07-15 13:53:45.501518] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.026 13:53:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.026 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.285 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.285 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.285 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.285 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.285 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:19.285 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.285 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.285 Malloc1 00:23:19.285 [2024-07-15 13:53:45.600291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.285 Malloc2 00:23:19.285 Malloc3 00:23:19.285 Malloc4 00:23:19.285 Malloc5 00:23:19.285 Malloc6 00:23:19.285 Malloc7 00:23:19.545 Malloc8 00:23:19.545 Malloc9 00:23:19.545 Malloc10 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1165262 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1165262 /var/tmp/bdevperf.sock 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1165262 ']' 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.545 { 00:23:19.545 "params": { 00:23:19.545 "name": "Nvme$subsystem", 00:23:19.545 "trtype": "$TEST_TRANSPORT", 00:23:19.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.545 "adrfam": "ipv4", 00:23:19.545 "trsvcid": "$NVMF_PORT", 00:23:19.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.545 "hdgst": ${hdgst:-false}, 00:23:19.545 "ddgst": ${ddgst:-false} 00:23:19.545 }, 00:23:19.545 "method": "bdev_nvme_attach_controller" 00:23:19.545 } 00:23:19.545 EOF 00:23:19.545 )") 00:23:19.545 13:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.545 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.545 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.545 { 00:23:19.545 "params": { 00:23:19.545 "name": "Nvme$subsystem", 00:23:19.545 "trtype": "$TEST_TRANSPORT", 00:23:19.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.545 "adrfam": "ipv4", 00:23:19.545 "trsvcid": "$NVMF_PORT", 00:23:19.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.545 "hdgst": ${hdgst:-false}, 00:23:19.545 "ddgst": ${ddgst:-false} 00:23:19.545 }, 00:23:19.545 "method": "bdev_nvme_attach_controller" 00:23:19.545 } 00:23:19.545 EOF 00:23:19.545 )") 00:23:19.545 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.545 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.545 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.545 { 00:23:19.545 "params": { 00:23:19.545 "name": "Nvme$subsystem", 00:23:19.545 "trtype": "$TEST_TRANSPORT", 00:23:19.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.545 "adrfam": "ipv4", 00:23:19.545 "trsvcid": "$NVMF_PORT", 00:23:19.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.545 "hdgst": ${hdgst:-false}, 00:23:19.545 "ddgst": ${ddgst:-false} 00:23:19.545 }, 00:23:19.545 "method": "bdev_nvme_attach_controller" 00:23:19.545 } 00:23:19.545 EOF 00:23:19.545 )") 00:23:19.545 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.545 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:19.545 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.545 { 00:23:19.545 "params": { 00:23:19.545 "name": "Nvme$subsystem", 00:23:19.545 "trtype": "$TEST_TRANSPORT", 00:23:19.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.545 "adrfam": "ipv4", 00:23:19.545 "trsvcid": "$NVMF_PORT", 00:23:19.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.545 "hdgst": ${hdgst:-false}, 00:23:19.545 "ddgst": ${ddgst:-false} 00:23:19.545 }, 00:23:19.545 "method": "bdev_nvme_attach_controller" 00:23:19.545 } 00:23:19.545 EOF 00:23:19.545 )") 00:23:19.545 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.545 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.545 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.545 { 00:23:19.545 "params": { 00:23:19.545 "name": "Nvme$subsystem", 00:23:19.545 "trtype": "$TEST_TRANSPORT", 00:23:19.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.545 "adrfam": "ipv4", 00:23:19.545 "trsvcid": "$NVMF_PORT", 00:23:19.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.545 "hdgst": ${hdgst:-false}, 00:23:19.545 "ddgst": ${ddgst:-false} 00:23:19.545 }, 00:23:19.545 "method": "bdev_nvme_attach_controller" 00:23:19.545 } 00:23:19.545 EOF 00:23:19.545 )") 00:23:19.545 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.545 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.545 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.545 { 00:23:19.545 "params": { 00:23:19.545 "name": "Nvme$subsystem", 00:23:19.545 "trtype": "$TEST_TRANSPORT", 00:23:19.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.545 "adrfam": "ipv4", 00:23:19.545 "trsvcid": "$NVMF_PORT", 00:23:19.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.545 "hdgst": ${hdgst:-false}, 00:23:19.545 "ddgst": ${ddgst:-false} 00:23:19.545 }, 00:23:19.545 "method": "bdev_nvme_attach_controller" 00:23:19.546 } 00:23:19.546 EOF 00:23:19.546 )") 00:23:19.546 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.546 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.546 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.546 { 00:23:19.546 "params": { 00:23:19.546 "name": "Nvme$subsystem", 00:23:19.546 "trtype": "$TEST_TRANSPORT", 00:23:19.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.546 "adrfam": "ipv4", 00:23:19.546 "trsvcid": "$NVMF_PORT", 00:23:19.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.546 "hdgst": ${hdgst:-false}, 00:23:19.546 "ddgst": ${ddgst:-false} 00:23:19.546 }, 00:23:19.546 "method": "bdev_nvme_attach_controller" 00:23:19.546 } 00:23:19.546 EOF 00:23:19.546 )") 00:23:19.546 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.546 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:23:19.546 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.546 { 00:23:19.546 "params": { 00:23:19.546 "name": "Nvme$subsystem", 00:23:19.546 "trtype": "$TEST_TRANSPORT", 00:23:19.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.546 "adrfam": "ipv4", 00:23:19.546 "trsvcid": "$NVMF_PORT", 00:23:19.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.546 "hdgst": ${hdgst:-false}, 00:23:19.546 "ddgst": ${ddgst:-false} 00:23:19.546 }, 00:23:19.546 "method": "bdev_nvme_attach_controller" 00:23:19.546 } 00:23:19.546 EOF 00:23:19.546 )") 00:23:19.546 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.546 [2024-07-15 13:53:46.049692] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:19.546 [2024-07-15 13:53:46.049748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165262 ] 00:23:19.546 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.546 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.546 { 00:23:19.546 "params": { 00:23:19.546 "name": "Nvme$subsystem", 00:23:19.546 "trtype": "$TEST_TRANSPORT", 00:23:19.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.546 "adrfam": "ipv4", 00:23:19.546 "trsvcid": "$NVMF_PORT", 00:23:19.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.546 "hdgst": ${hdgst:-false}, 00:23:19.546 "ddgst": ${ddgst:-false} 00:23:19.546 }, 00:23:19.546 "method": "bdev_nvme_attach_controller" 00:23:19.546 } 00:23:19.546 EOF 00:23:19.546 )") 00:23:19.546 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.546 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.546 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.546 { 00:23:19.546 "params": { 00:23:19.546 "name": "Nvme$subsystem", 00:23:19.546 "trtype": "$TEST_TRANSPORT", 00:23:19.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.546 "adrfam": "ipv4", 00:23:19.546 "trsvcid": "$NVMF_PORT", 00:23:19.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.546 "hdgst": ${hdgst:-false}, 00:23:19.546 "ddgst": ${ddgst:-false} 00:23:19.546 }, 00:23:19.546 "method": "bdev_nvme_attach_controller" 00:23:19.546 } 00:23:19.546 EOF 00:23:19.546 )") 00:23:19.546 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.546 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
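The config fragments assembled above are joined, normalized with jq, and handed to bdevperf over file descriptor 63 (the --json /dev/fd/63 argument in the command line above), one bdev_nvme_attach_controller entry per cnode1..cnode10 subsystem; the resulting JSON is printed just below. Outside the harness, the same run could be reproduced roughly as follows, assuming the target started earlier is still listening and nvmf/common.sh has been sourced (the /tmp/bdevperf.json path is illustrative, not from this run):

    # hypothetical standalone equivalent of the harness invocation above
    gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 > /tmp/bdevperf.json    # helper from nvmf/common.sh
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
        -q 64 -o 65536 -w verify -t 10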
00:23:19.806 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:19.806 13:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:19.806 "params": { 00:23:19.806 "name": "Nvme1", 00:23:19.806 "trtype": "tcp", 00:23:19.806 "traddr": "10.0.0.2", 00:23:19.806 "adrfam": "ipv4", 00:23:19.806 "trsvcid": "4420", 00:23:19.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.806 "hdgst": false, 00:23:19.806 "ddgst": false 00:23:19.806 }, 00:23:19.806 "method": "bdev_nvme_attach_controller" 00:23:19.806 },{ 00:23:19.806 "params": { 00:23:19.806 "name": "Nvme2", 00:23:19.806 "trtype": "tcp", 00:23:19.806 "traddr": "10.0.0.2", 00:23:19.806 "adrfam": "ipv4", 00:23:19.806 "trsvcid": "4420", 00:23:19.806 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:19.806 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:19.806 "hdgst": false, 00:23:19.806 "ddgst": false 00:23:19.806 }, 00:23:19.806 "method": "bdev_nvme_attach_controller" 00:23:19.806 },{ 00:23:19.806 "params": { 00:23:19.806 "name": "Nvme3", 00:23:19.806 "trtype": "tcp", 00:23:19.806 "traddr": "10.0.0.2", 00:23:19.806 "adrfam": "ipv4", 00:23:19.806 "trsvcid": "4420", 00:23:19.806 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:19.806 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:19.806 "hdgst": false, 00:23:19.806 "ddgst": false 00:23:19.806 }, 00:23:19.806 "method": "bdev_nvme_attach_controller" 00:23:19.806 },{ 00:23:19.806 "params": { 00:23:19.806 "name": "Nvme4", 00:23:19.806 "trtype": "tcp", 00:23:19.806 "traddr": "10.0.0.2", 00:23:19.806 "adrfam": "ipv4", 00:23:19.806 "trsvcid": "4420", 00:23:19.806 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:19.806 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:19.806 "hdgst": false, 00:23:19.806 "ddgst": false 00:23:19.806 }, 00:23:19.806 "method": "bdev_nvme_attach_controller" 00:23:19.806 },{ 00:23:19.806 "params": { 00:23:19.806 "name": "Nvme5", 00:23:19.806 "trtype": "tcp", 00:23:19.806 "traddr": "10.0.0.2", 00:23:19.806 "adrfam": "ipv4", 00:23:19.806 "trsvcid": "4420", 00:23:19.806 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:19.806 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:19.806 "hdgst": false, 00:23:19.806 "ddgst": false 00:23:19.806 }, 00:23:19.806 "method": "bdev_nvme_attach_controller" 00:23:19.806 },{ 00:23:19.806 "params": { 00:23:19.806 "name": "Nvme6", 00:23:19.806 "trtype": "tcp", 00:23:19.806 "traddr": "10.0.0.2", 00:23:19.806 "adrfam": "ipv4", 00:23:19.806 "trsvcid": "4420", 00:23:19.806 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:19.806 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:19.806 "hdgst": false, 00:23:19.806 "ddgst": false 00:23:19.806 }, 00:23:19.806 "method": "bdev_nvme_attach_controller" 00:23:19.806 },{ 00:23:19.806 "params": { 00:23:19.806 "name": "Nvme7", 00:23:19.806 "trtype": "tcp", 00:23:19.806 "traddr": "10.0.0.2", 00:23:19.806 "adrfam": "ipv4", 00:23:19.806 "trsvcid": "4420", 00:23:19.806 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:19.806 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:19.806 "hdgst": false, 00:23:19.806 "ddgst": false 00:23:19.806 }, 00:23:19.806 "method": "bdev_nvme_attach_controller" 00:23:19.806 },{ 00:23:19.806 "params": { 00:23:19.806 "name": "Nvme8", 00:23:19.806 "trtype": "tcp", 00:23:19.806 "traddr": "10.0.0.2", 00:23:19.806 "adrfam": "ipv4", 00:23:19.806 "trsvcid": "4420", 00:23:19.806 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:19.806 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:19.806 "hdgst": false, 
00:23:19.806 "ddgst": false 00:23:19.806 }, 00:23:19.806 "method": "bdev_nvme_attach_controller" 00:23:19.806 },{ 00:23:19.806 "params": { 00:23:19.806 "name": "Nvme9", 00:23:19.806 "trtype": "tcp", 00:23:19.806 "traddr": "10.0.0.2", 00:23:19.806 "adrfam": "ipv4", 00:23:19.806 "trsvcid": "4420", 00:23:19.806 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:19.807 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:19.807 "hdgst": false, 00:23:19.807 "ddgst": false 00:23:19.807 }, 00:23:19.807 "method": "bdev_nvme_attach_controller" 00:23:19.807 },{ 00:23:19.807 "params": { 00:23:19.807 "name": "Nvme10", 00:23:19.807 "trtype": "tcp", 00:23:19.807 "traddr": "10.0.0.2", 00:23:19.807 "adrfam": "ipv4", 00:23:19.807 "trsvcid": "4420", 00:23:19.807 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:19.807 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:19.807 "hdgst": false, 00:23:19.807 "ddgst": false 00:23:19.807 }, 00:23:19.807 "method": "bdev_nvme_attach_controller" 00:23:19.807 }' 00:23:19.807 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.807 [2024-07-15 13:53:46.109745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.807 [2024-07-15 13:53:46.174364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.203 Running I/O for 10 seconds... 00:23:21.203 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.203 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:21.203 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:21.203 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.203 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.463 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.463 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.463 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:21.463 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:21.463 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:21.463 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:21.463 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:21.463 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:21.463 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:21.463 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:21.463 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:21.463 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.463 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.463 13:53:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.463 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:21.463 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:21.463 13:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:21.724 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:21.724 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:21.724 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:21.724 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:21.724 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.724 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.724 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.724 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:21.724 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:21.724 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:21.986 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:21.986 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:21.986 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:21.986 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:21.986 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.986 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.986 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.986 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:21.986 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:21.986 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:21.986 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:21.986 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:21.986 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1165007 00:23:21.986 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1165007 ']' 00:23:21.986 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1165007 00:23:21.986 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:22.263 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:22.263 13:53:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1165007
00:23:22.263 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:22.263 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:22.263 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1165007'
00:23:22.263 killing process with pid 1165007
00:23:22.263 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1165007
00:23:22.263 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1165007
00:23:22.263 [2024-07-15 13:53:48.563286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33260 is same with the state(5) to be set
00:23:22.263 [2024-07-15 13:53:48.563614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33260 is same with the state(5) to be set
00:23:22.263 [2024-07-15 13:53:48.565495] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc040 is same with the state(5) to be set
00:23:22.264 [2024-07-15 13:53:48.565809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc040 is same with the state(5) to be set
00:23:22.264 [2024-07-15 13:53:48.566765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc4e0 is same with the state(5) to be set
00:23:22.265 [2024-07-15 13:53:48.567051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc4e0 is same with the state(5) to be set
00:23:22.265 [2024-07-15 13:53:48.567573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc9a0 is same with the state(5) to be set
00:23:22.265 [2024-07-15 13:53:48.567869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fc9a0 is same with the state(5) to be set
00:23:22.265 [2024-07-15 13:53:48.568526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fce40 is same with the state(5) to be set
00:23:22.266 [2024-07-15 13:53:48.568823] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fce40 is same with the state(5) to be set
00:23:22.266 [2024-07-15 13:53:48.569558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set
00:23:22.267 [2024-07-15 13:53:48.569854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set
00:23:22.267 [2024-07-15 13:53:48.570751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32da0 is same with the state(5) to be set
00:23:22.268 [2024-07-15 13:53:48.570989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32da0 is same with the state(5) to be set
00:23:22.268 [2024-07-15 13:53:48.571767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:22.268 [2024-07-15 13:53:48.571801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.268 [2024-07-15 13:53:48.571812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:22.268 [2024-07-15 13:53:48.571820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.268 [2024-07-15 13:53:48.571828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:22.268 [2024-07-15 13:53:48.571835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.268 [2024-07-15 13:53:48.571843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268
[2024-07-15 13:53:48.571855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.571862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a0290 is same with the state(5) to be set 00:23:22.268 [2024-07-15 13:53:48.571894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.571902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.571910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.571917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.571925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.571931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.571939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.571946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.571953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13de5d0 is same with the state(5) to be set 00:23:22.268 [2024-07-15 13:53:48.571975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.571983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.571991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.571998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15aa210 is same with the state(5) to be set 00:23:22.268 [2024-07-15 13:53:48.572056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30340 is same with the state(5) to be set 00:23:22.268 [2024-07-15 13:53:48.572150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157be90 is same with the state(5) to be set 00:23:22.268 [2024-07-15 13:53:48.572234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 
13:53:48.572258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141b030 is same with the state(5) to be set 00:23:22.268 [2024-07-15 13:53:48.572316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141aca0 is same with the state(5) to be set 00:23:22.268 [2024-07-15 13:53:48.572399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b2990 is same with the state(5) to be set 00:23:22.268 [2024-07-15 13:53:48.572488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.268 [2024-07-15 13:53:48.572534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.268 [2024-07-15 13:53:48.572541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.572548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159f0c0 is same with the state(5) to be set 00:23:22.269 [2024-07-15 13:53:48.572893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.572908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.572927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.572935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.572945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.572952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.572961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.572968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:22.269 [2024-07-15 13:53:48.572978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.572985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.572994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 
[2024-07-15 13:53:48.573148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 
13:53:48.573311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 
13:53:48.573474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.269 [2024-07-15 13:53:48.573587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.269 [2024-07-15 13:53:48.573594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 
13:53:48.573635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 
13:53:48.573795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.573940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.573952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 
13:53:48.573975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:22.270 [2024-07-15 13:53:48.574016] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14824b0 was disconnected and freed. reset controller. 00:23:22.270 [2024-07-15 13:53:48.574047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.270 [2024-07-15 13:53:48.574370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.270 [2024-07-15 13:53:48.574377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.574386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.574393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.574402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.574409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.574419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.574426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.574437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.574446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.574455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.574462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.574471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.574478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.574487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.574494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.574503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.574510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.574519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.574526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.574535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.574542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.574552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.574559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.574569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.574576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.574585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.574592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.574601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.574608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.580968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32da0 is same with the state(5) to be set 00:23:22.271 [2024-07-15 13:53:48.580988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32da0 is same with the state(5) to be set 00:23:22.271 [2024-07-15 13:53:48.580994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32da0 is same with the state(5) to be set 00:23:22.271 [2024-07-15 13:53:48.581000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32da0 is same with the state(5) to be set 00:23:22.271 [2024-07-15 13:53:48.581005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32da0 is same with the state(5) to be set 00:23:22.271 [2024-07-15 13:53:48.581014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32da0 is same with the state(5) to be set 00:23:22.271 [2024-07-15 13:53:48.581018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32da0 is same with the state(5) to be set 00:23:22.271 [2024-07-15 13:53:48.581023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32da0 is same with the state(5) to be set 00:23:22.271 [2024-07-15 13:53:48.581027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32da0 is same with the state(5) to be set 00:23:22.271 [2024-07-15 13:53:48.581032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32da0 is same with the state(5) to be set 00:23:22.271 [2024-07-15 13:53:48.581036] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32da0 is same with the state(5) to be set 00:23:22.271 [2024-07-15 
13:53:48.581041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32da0 is same with the state(5) to be set 00:23:22.271 [2024-07-15 13:53:48.581045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32da0 is same with the state(5) to be set 00:23:22.271 [2024-07-15 13:53:48.581050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32da0 is same with the state(5) to be set 00:23:22.271 [2024-07-15 13:53:48.590805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.590841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.590854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.590863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.590874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.590882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.590891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.590899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.590908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.590915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.590925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.590933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.590942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.590950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.590959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.590966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.590976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.590989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.590998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.591005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.591014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.591021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.591030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.591037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.591047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.591054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.591064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.591072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.591081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.591088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.271 [2024-07-15 13:53:48.591098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.271 [2024-07-15 13:53:48.591105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591416] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15194e0 was disconnected and freed. reset controller. 00:23:22.272 [2024-07-15 13:53:48.591745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591903] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.591984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.591991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.592000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.592007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.592017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.592025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.592034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.592041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.592051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.592058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.592067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.592074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.592083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.592090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.592099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.592106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.592115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.592128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.592138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.592145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.592154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.592161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.592171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.592177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.592187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.592194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.592203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.592210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.592220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-07-15 13:53:48.592228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.272 [2024-07-15 13:53:48.592238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 
lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:22.273 [2024-07-15 13:53:48.592733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-07-15 13:53:48.592813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.592866] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1485770 was disconnected and freed. reset controller. 
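(Note on the dumps above: the repeated "ABORTED - SQ DELETION (00/08)" completions are the expected fallout of the controller resets this test exercises. The pair in parentheses is the NVMe status code type and status code; type 0x0 with code 0x08 is the generic "Command Aborted due to SQ Deletion" status reported for I/O that was still queued when the target tore down the submission queue. As a rough standalone sketch, not SPDK's own decoder, the 16-bit status half of completion dword 3 can be unpacked like this:)

    #include <stdio.h>
    #include <stdint.h>

    /*
     * Illustrative helper (assumption: plain NVMe completion layout, not SPDK code).
     * In the upper 16 bits of completion dword 3: bit 0 = phase tag,
     * bits 8:1 = status code (sc), bits 11:9 = status code type (sct),
     * bit 14 = more (m), bit 15 = do not retry (dnr).
     */
    static void decode_cpl_status(uint16_t status)
    {
        unsigned sc  = (status >> 1) & 0xff;
        unsigned sct = (status >> 9) & 0x7;
        unsigned m   = (status >> 14) & 0x1;
        unsigned dnr = (status >> 15) & 0x1;

        printf("sct=%02x sc=%02x m=%u dnr=%u\n", sct, sc, m, dnr);
    }

    int main(void)
    {
        /* sct 0x0 (generic), sc 0x08 (aborted, SQ deletion) -> the "(00/08)" seen in the log */
        decode_cpl_status(0x08 << 1);
        return 0;
    }

(Running the sketch prints sct=00 sc=08 m=0 dnr=0, matching the "(00/08) ... m:0 dnr:0" fields in the completions above.)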
00:23:22.273 [2024-07-15 13:53:48.593008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a0290 (9): Bad file descriptor 00:23:22.273 [2024-07-15 13:53:48.593029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13de5d0 (9): Bad file descriptor 00:23:22.273 [2024-07-15 13:53:48.593045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15aa210 (9): Bad file descriptor 00:23:22.273 [2024-07-15 13:53:48.593057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30340 (9): Bad file descriptor 00:23:22.273 [2024-07-15 13:53:48.593069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157be90 (9): Bad file descriptor 00:23:22.273 [2024-07-15 13:53:48.593086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141b030 (9): Bad file descriptor 00:23:22.273 [2024-07-15 13:53:48.593102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141aca0 (9): Bad file descriptor 00:23:22.273 [2024-07-15 13:53:48.593117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b2990 (9): Bad file descriptor 00:23:22.273 [2024-07-15 13:53:48.593154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.273 [2024-07-15 13:53:48.593165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.593174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.273 [2024-07-15 13:53:48.593180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.273 [2024-07-15 13:53:48.593188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.274 [2024-07-15 13:53:48.593195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.593203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:22.274 [2024-07-15 13:53:48.593210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.593216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14011b0 is same with the state(5) to be set 00:23:22.274 [2024-07-15 13:53:48.593234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159f0c0 (9): Bad file descriptor 00:23:22.274 [2024-07-15 13:53:48.597094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:22.274 [2024-07-15 13:53:48.597118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:22.274 [2024-07-15 13:53:48.597484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:22.274 [2024-07-15 13:53:48.597930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:23:22.274 [2024-07-15 13:53:48.597949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13de5d0 with addr=10.0.0.2, port=4420 00:23:22.274 [2024-07-15 13:53:48.597958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13de5d0 is same with the state(5) to be set 00:23:22.274 [2024-07-15 13:53:48.598475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.274 [2024-07-15 13:53:48.598514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15aa210 with addr=10.0.0.2, port=4420 00:23:22.274 [2024-07-15 13:53:48.598525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15aa210 is same with the state(5) to be set 00:23:22.274 [2024-07-15 13:53:48.599111] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:22.274 [2024-07-15 13:53:48.599161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:22.274 [2024-07-15 13:53:48.599304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 
13:53:48.599469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599632] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.274 [2024-07-15 13:53:48.599754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-07-15 13:53:48.599761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.599770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.599777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.599786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.599793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.599803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.599811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.599820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.599827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.599836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.599843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.599852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.599860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.599869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.599876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.599885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.599893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.599903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.599910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.599919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.599926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.599935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.599942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.599951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.599959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.599968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.599976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.599985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.599992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.600001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.600008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.600019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.600026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.600035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.600042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.600051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.600058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.600067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.600074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.600083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.600090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.600099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.600106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.600115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.600127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.600137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.600144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.600153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.600160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.600169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.600176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.600186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.600193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.600202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.600209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.600218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.600229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.600237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d8530 is same with the state(5) to be set 00:23:22.275 [2024-07-15 13:53:48.600279] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13d8530 was disconnected and freed. reset controller. 
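(Note on the reconnect errors: the "connect() failed, errno = 111" lines that accompany the "resetting controller" notices indicate the socket-level connect to 10.0.0.2:4420 was refused, which is consistent with the target side not accepting on that port at that moment during the reset; errno 111 is ECONNREFUSED on Linux. A minimal standalone sketch of the same failure mode, using a loopback address and an assumed-unused port rather than the test's actual 10.0.0.2:4420:)

    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /*
     * Illustration only: connecting to a port with no listener fails with
     * ECONNREFUSED (111 on Linux), the same errno reported by the SPDK
     * posix sock layer above. The address/port here are placeholders.
     */
    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4419);               /* assumed-unused port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }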
00:23:22.275 [2024-07-15 13:53:48.600332] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:22.275 [2024-07-15 13:53:48.600372] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:22.275 [2024-07-15 13:53:48.600704] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:22.275 [2024-07-15 13:53:48.601028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.275 [2024-07-15 13:53:48.601042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a0290 with addr=10.0.0.2, port=4420 00:23:22.275 [2024-07-15 13:53:48.601050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a0290 is same with the state(5) to be set 00:23:22.275 [2024-07-15 13:53:48.601062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13de5d0 (9): Bad file descriptor 00:23:22.275 [2024-07-15 13:53:48.601072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15aa210 (9): Bad file descriptor 00:23:22.275 [2024-07-15 13:53:48.602346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.602362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.602377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.602385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.602396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.275 [2024-07-15 13:53:48.602404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.275 [2024-07-15 13:53:48.602415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:22.276 [2024-07-15 13:53:48.602658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 
[2024-07-15 13:53:48.602821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 
13:53:48.602983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.602990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.602999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.603006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.603015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.603023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.603032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.603039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.603048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.603055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.603065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.603072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.603081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.603087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.603097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.276 [2024-07-15 13:53:48.603104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.276 [2024-07-15 13:53:48.603113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603151] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.603417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.603425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d99c0 is same with the state(5) to be set 00:23:22.277 [2024-07-15 13:53:48.603463] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13d99c0 was disconnected and freed. reset controller. 
00:23:22.277 [2024-07-15 13:53:48.603529] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:22.277 [2024-07-15 13:53:48.603550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:22.277 [2024-07-15 13:53:48.603580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a0290 (9): Bad file descriptor 00:23:22.277 [2024-07-15 13:53:48.603590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:22.277 [2024-07-15 13:53:48.603596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:22.277 [2024-07-15 13:53:48.603605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:22.277 [2024-07-15 13:53:48.603617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:22.277 [2024-07-15 13:53:48.603624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:22.277 [2024-07-15 13:53:48.603631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:22.277 [2024-07-15 13:53:48.603676] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:22.277 [2024-07-15 13:53:48.603696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14011b0 (9): Bad file descriptor 00:23:22.277 [2024-07-15 13:53:48.604978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:22.277 [2024-07-15 13:53:48.604991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:22.277 [2024-07-15 13:53:48.605014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:22.277 [2024-07-15 13:53:48.605445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.277 [2024-07-15 13:53:48.605484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141aca0 with addr=10.0.0.2, port=4420 00:23:22.277 [2024-07-15 13:53:48.605495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141aca0 is same with the state(5) to be set 00:23:22.277 [2024-07-15 13:53:48.605505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:22.277 [2024-07-15 13:53:48.605512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:22.277 [2024-07-15 13:53:48.605520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:23:22.277 [2024-07-15 13:53:48.605569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.605579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.605596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.605603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.605613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.605620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.605629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.605636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.605646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.605652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.605662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.605669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.605678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.605685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.605695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.605702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.605711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.605718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.605727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.605734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 
13:53:48.605748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.605755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.605764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.605771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.605780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.605788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.605797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.277 [2024-07-15 13:53:48.605804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.277 [2024-07-15 13:53:48.605813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.605820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.605829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.605836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.605845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.605852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.605861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.605868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.605877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.605884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.605893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.605900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.605909] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.605916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.605926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.605932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.605942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.605950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.605960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.605966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.605975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.605983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.605992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.605999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.278 [2024-07-15 13:53:48.606541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.278 [2024-07-15 13:53:48.606548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.606557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.606564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.606573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.606580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.606589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.606596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.606607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.606614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.606623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.606630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.606639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.606646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.606655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.606662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.606670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a970 is same with the state(5) to be set 00:23:22.279 [2024-07-15 13:53:48.608252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608353] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.279 [2024-07-15 13:53:48.608730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.279 [2024-07-15 13:53:48.608738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.608746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.608755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.608762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.608771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.608778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.608787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.608794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.608803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.608814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.608824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.608830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.608839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.608846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.608855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.608863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.608872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.608879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.608888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.608895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.608904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.608912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.608922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.608928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.608938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.608945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.608954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.608961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.608970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.608977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.608986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.608993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:22.280 [2024-07-15 13:53:48.609020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 
13:53:48.609191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.609316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.609323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482e70 is same with the state(5) to be set 00:23:22.280 [2024-07-15 13:53:48.610600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.610612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.610625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.610634] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.610645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.610654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.610665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.610673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.610684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.610693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.610702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.610709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.280 [2024-07-15 13:53:48.610718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.280 [2024-07-15 13:53:48.610728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.610738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.610745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.610754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.610761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.610770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.610777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.610786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.610793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.610803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.610809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.610819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.610826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.610835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.610842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.610851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.610859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.610868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.610875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.610884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.610891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.610900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.610907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.610916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.610923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.610934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.610941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.610951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.610958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.610967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.610974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.610984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.610991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.281 [2024-07-15 13:53:48.611412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.281 [2024-07-15 13:53:48.611419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.611428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.611435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.611444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.611451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.611460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.611467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.611477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.611485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.611494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.611501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.611510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.611517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.611526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.611533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.611542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.611551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.611560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.611567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.611576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.611584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.611593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.611600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.611609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.611616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.611625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.611632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.611641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.611648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.611657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.611664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.611672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484300 is same with the state(5) to be set 00:23:22.282 [2024-07-15 13:53:48.612953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.612967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.612980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.612989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 
nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.282 [2024-07-15 13:53:48.613394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.282 [2024-07-15 13:53:48.613403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:22.283 [2024-07-15 13:53:48.613753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 
13:53:48.613916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.613991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.613998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.614007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.283 [2024-07-15 13:53:48.614014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.283 [2024-07-15 13:53:48.614022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1487f60 is same with the state(5) to be set 00:23:22.283 [2024-07-15 13:53:48.615794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:22.283 [2024-07-15 13:53:48.615816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:22.283 [2024-07-15 13:53:48.615828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:22.283 [2024-07-15 13:53:48.615838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:22.283 [2024-07-15 13:53:48.616295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.283 [2024-07-15 13:53:48.616310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141b030 with addr=10.0.0.2, port=4420 00:23:22.283 [2024-07-15 13:53:48.616318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141b030 is same with the state(5) to be set 00:23:22.283 [2024-07-15 13:53:48.616333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141aca0 (9): Bad file descriptor 00:23:22.283 [2024-07-15 13:53:48.616391] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:22.283 [2024-07-15 13:53:48.616405] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:22.284 [2024-07-15 13:53:48.616417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141b030 (9): Bad file descriptor 00:23:22.284 [2024-07-15 13:53:48.616743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:22.284 [2024-07-15 13:53:48.617198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.284 [2024-07-15 13:53:48.617211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b2990 with addr=10.0.0.2, port=4420 00:23:22.284 [2024-07-15 13:53:48.617218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b2990 is same with the state(5) to be set 00:23:22.284 [2024-07-15 13:53:48.617650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.284 [2024-07-15 13:53:48.617660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x157be90 with addr=10.0.0.2, port=4420 00:23:22.284 [2024-07-15 13:53:48.617667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157be90 is same with the state(5) to be set 00:23:22.284 [2024-07-15 13:53:48.617959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.284 [2024-07-15 13:53:48.617968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30340 with addr=10.0.0.2, port=4420 00:23:22.284 [2024-07-15 13:53:48.617975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30340 is same with the state(5) to be set 00:23:22.284 [2024-07-15 13:53:48.617983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:22.284 [2024-07-15 13:53:48.617990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:22.284 [2024-07-15 13:53:48.617997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:23:22.284 [2024-07-15 13:53:48.618818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.618829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.618841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.618849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.618858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.618865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.618874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.618881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.618890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.618897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.618906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.618917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.618926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.618933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.618942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.618949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.618958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.618965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.618974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.618981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 
13:53:48.618991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.618997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619157] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.284 [2024-07-15 13:53:48.619287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.284 [2024-07-15 13:53:48.619294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.285 [2024-07-15 13:53:48.619864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.285 [2024-07-15 13:53:48.619871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1486af0 is same with the state(5) to be set 00:23:22.285 [2024-07-15 13:53:48.621596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:22.285 [2024-07-15 13:53:48.621618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:22.285 [2024-07-15 13:53:48.621626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:22.285 [2024-07-15 13:53:48.621635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
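The "(00/08)" pair printed with each aborted completion above is the NVMe status: status code type 00h (generic command status) and status code 08h, Command Aborted due to SQ Deletion. It is what the initiator reports when outstanding verify I/O is dropped because its submission queue was torn down while the controllers were being reset, so these notices are expected noise for this test rather than data errors. When triaging a run like this it can help to count how many completions were dropped this way per queue; a minimal sketch, assuming the console output was saved to a file named build.log (the file name is an assumption, not part of the harness):

    # Count "ABORTED - SQ DELETION" completions per qid from a saved copy of this log.
    grep -o 'ABORTED - SQ DELETION ([0-9a-f]*/[0-9a-f]*) qid:[0-9]*' build.log \
      | awk '{print $NF}' \
      | sort | uniq -c | sort -rn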
00:23:22.285 task offset: 24576 on job bdev=Nvme1n1 fails
00:23:22.285 
00:23:22.285                                                            Latency(us)
00:23:22.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:22.285 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.285 Job: Nvme1n1 ended in about 0.94 seconds with error
00:23:22.285 Verification LBA range: start 0x0 length 0x400
00:23:22.285 Nvme1n1 : 0.94 204.12 12.76 68.04 0.00 232509.65 22828.37 248162.99
00:23:22.285 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.285 Job: Nvme2n1 ended in about 0.94 seconds with error
00:23:22.285 Verification LBA range: start 0x0 length 0x400
00:23:22.285 Nvme2n1 : 0.94 203.87 12.74 67.96 0.00 228061.65 21080.75 241172.48
00:23:22.285 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.285 Job: Nvme3n1 ended in about 0.95 seconds with error
00:23:22.285 Verification LBA range: start 0x0 length 0x400
00:23:22.285 Nvme3n1 : 0.95 134.16 8.39 67.08 0.00 301988.41 41069.23 276125.01
00:23:22.285 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.285 Job: Nvme4n1 ended in about 0.95 seconds with error
00:23:22.285 Verification LBA range: start 0x0 length 0x400
00:23:22.285 Nvme4n1 : 0.95 202.42 12.65 67.47 0.00 220200.75 19770.03 242920.11
00:23:22.286 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.286 Job: Nvme5n1 ended in about 0.95 seconds with error
00:23:22.286 Verification LBA range: start 0x0 length 0x400
00:23:22.286 Nvme5n1 : 0.95 201.87 12.62 67.29 0.00 216182.40 19770.03 244667.73
00:23:22.286 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.286 Job: Nvme6n1 ended in about 0.96 seconds with error
00:23:22.286 Verification LBA range: start 0x0 length 0x400
00:23:22.286 Nvme6n1 : 0.96 133.79 8.36 66.90 0.00 283822.93 22609.92 269134.51
00:23:22.286 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.286 Job: Nvme7n1 ended in about 0.96 seconds with error
00:23:22.286 Verification LBA range: start 0x0 length 0x400
00:23:22.286 Nvme7n1 : 0.96 133.47 8.34 66.73 0.00 278357.62 19114.67 246415.36
00:23:22.286 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.286 Job: Nvme8n1 ended in about 0.94 seconds with error
00:23:22.286 Verification LBA range: start 0x0 length 0x400
00:23:22.286 Nvme8n1 : 0.94 203.56 12.72 67.85 0.00 200145.49 20097.71 225443.84
00:23:22.286 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.286 Job: Nvme9n1 ended in about 0.97 seconds with error
00:23:22.286 Verification LBA range: start 0x0 length 0x400
00:23:22.286 Nvme9n1 : 0.97 198.50 12.41 66.17 0.00 201424.64 13598.72 241172.48
00:23:22.286 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.286 Job: Nvme10n1 ended in about 0.96 seconds with error
00:23:22.286 Verification LBA range: start 0x0 length 0x400
00:23:22.286 Nvme10n1 : 0.96 137.30 8.58 66.57 0.00 254988.26 22500.69 269134.51
00:23:22.286 ===================================================================================================================
00:23:22.286 Total : 1753.08 109.57 672.07 0.00 237573.84 13598.72 276125.01
00:23:22.286 [2024-07-15 13:53:48.645086] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 
00:23:22.286 [2024-07-15 13:53:48.645117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting 
controller 00:23:22.286 [2024-07-15 13:53:48.645624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.286 [2024-07-15 13:53:48.645640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x159f0c0 with addr=10.0.0.2, port=4420 00:23:22.286 [2024-07-15 13:53:48.645650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159f0c0 is same with the state(5) to be set 00:23:22.286 [2024-07-15 13:53:48.645662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b2990 (9): Bad file descriptor 00:23:22.286 [2024-07-15 13:53:48.645672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157be90 (9): Bad file descriptor 00:23:22.286 [2024-07-15 13:53:48.645682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30340 (9): Bad file descriptor 00:23:22.286 [2024-07-15 13:53:48.645690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:22.286 [2024-07-15 13:53:48.645696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:22.286 [2024-07-15 13:53:48.645710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:22.286 [2024-07-15 13:53:48.645821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:22.286 [2024-07-15 13:53:48.646257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.286 [2024-07-15 13:53:48.646269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15aa210 with addr=10.0.0.2, port=4420 00:23:22.286 [2024-07-15 13:53:48.646277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15aa210 is same with the state(5) to be set 00:23:22.286 [2024-07-15 13:53:48.646491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.286 [2024-07-15 13:53:48.646501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13de5d0 with addr=10.0.0.2, port=4420 00:23:22.286 [2024-07-15 13:53:48.646508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13de5d0 is same with the state(5) to be set 00:23:22.286 [2024-07-15 13:53:48.646727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.286 [2024-07-15 13:53:48.646740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a0290 with addr=10.0.0.2, port=4420 00:23:22.286 [2024-07-15 13:53:48.646747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a0290 is same with the state(5) to be set 00:23:22.286 [2024-07-15 13:53:48.647195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.286 [2024-07-15 13:53:48.647205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14011b0 with addr=10.0.0.2, port=4420 00:23:22.286 [2024-07-15 13:53:48.647212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14011b0 is same with the state(5) to be set 00:23:22.286 [2024-07-15 13:53:48.647222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159f0c0 (9): Bad file descriptor 00:23:22.286 [2024-07-15 13:53:48.647231] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:22.286 [2024-07-15 13:53:48.647237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:22.286 [2024-07-15 13:53:48.647244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:22.286 [2024-07-15 13:53:48.647255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:22.286 [2024-07-15 13:53:48.647261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:22.286 [2024-07-15 13:53:48.647268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:22.286 [2024-07-15 13:53:48.647278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:22.286 [2024-07-15 13:53:48.647284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:22.286 [2024-07-15 13:53:48.647290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:22.286 [2024-07-15 13:53:48.647333] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:22.286 [2024-07-15 13:53:48.647345] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:22.286 [2024-07-15 13:53:48.647354] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:22.286 [2024-07-15 13:53:48.647364] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:22.286 [2024-07-15 13:53:48.647683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:22.286 [2024-07-15 13:53:48.647692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:22.286 [2024-07-15 13:53:48.647701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:22.286 [2024-07-15 13:53:48.647715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15aa210 (9): Bad file descriptor 00:23:22.286 [2024-07-15 13:53:48.647724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13de5d0 (9): Bad file descriptor 00:23:22.286 [2024-07-15 13:53:48.647733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a0290 (9): Bad file descriptor 00:23:22.286 [2024-07-15 13:53:48.647742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14011b0 (9): Bad file descriptor 00:23:22.286 [2024-07-15 13:53:48.647750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:22.286 [2024-07-15 13:53:48.647756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:22.286 [2024-07-15 13:53:48.647763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:23:22.286 [2024-07-15 13:53:48.647806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:22.286 [2024-07-15 13:53:48.647816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:22.286 [2024-07-15 13:53:48.647824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:22.286 [2024-07-15 13:53:48.647842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:22.286 [2024-07-15 13:53:48.647849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:22.286 [2024-07-15 13:53:48.647855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:22.286 [2024-07-15 13:53:48.647864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:22.286 [2024-07-15 13:53:48.647871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:22.286 [2024-07-15 13:53:48.647877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:22.286 [2024-07-15 13:53:48.647886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:22.286 [2024-07-15 13:53:48.647893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:22.286 [2024-07-15 13:53:48.647899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:22.286 [2024-07-15 13:53:48.647908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:22.286 [2024-07-15 13:53:48.647915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:22.286 [2024-07-15 13:53:48.647921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:22.286 [2024-07-15 13:53:48.647953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:22.286 [2024-07-15 13:53:48.647961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:22.286 [2024-07-15 13:53:48.647967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:22.286 [2024-07-15 13:53:48.647973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
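As a quick sanity check on the bdevperf summary above, the Total row should be, to within rounding, the column-wise sum of the ten per-device rows: the per-job IOPS values add up to about 1753.06 against the reported 1753.08, and the MiB/s column sums to exactly 109.57. A small sketch that recomputes the sums from a saved copy of this log (the file name build.log is an assumption; the awk fields are counted from the end of each row so the leading timestamps do not matter):

    # Sum the IOPS and MiB/s columns of the per-device rows and compare with the Total row.
    awk '/Nvme[0-9]+n1 :/ { iops += $(NF-6); mibs += $(NF-5) }
         END { printf "per-device sums: IOPS %.2f, MiB/s %.2f\n", iops, mibs }' build.log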
00:23:22.286 [2024-07-15 13:53:48.648381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.286 [2024-07-15 13:53:48.648391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141aca0 with addr=10.0.0.2, port=4420 00:23:22.286 [2024-07-15 13:53:48.648399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141aca0 is same with the state(5) to be set 00:23:22.286 [2024-07-15 13:53:48.648808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.286 [2024-07-15 13:53:48.648821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x141b030 with addr=10.0.0.2, port=4420 00:23:22.286 [2024-07-15 13:53:48.648828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141b030 is same with the state(5) to be set 00:23:22.286 [2024-07-15 13:53:48.648856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141aca0 (9): Bad file descriptor 00:23:22.286 [2024-07-15 13:53:48.648865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x141b030 (9): Bad file descriptor 00:23:22.286 [2024-07-15 13:53:48.648891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:22.286 [2024-07-15 13:53:48.648898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:22.286 [2024-07-15 13:53:48.648904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:22.286 [2024-07-15 13:53:48.648913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:22.286 [2024-07-15 13:53:48.648919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:22.286 [2024-07-15 13:53:48.648926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:22.287 [2024-07-15 13:53:48.648952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:22.287 [2024-07-15 13:53:48.648959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
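The posix_sock_create errors repeated above all carry errno = 111, which on Linux is ECONNREFUSED. The target side appears to be gone at this point (the kill -9 below finds no such process), so every reconnect attempt to 10.0.0.2:4420 is refused and each controller ends up in the failed state, which is consistent with the shutdown test stopping the target mid-run. Two quick checks, sketched with standard Linux tools rather than anything from the test scripts (the address and port are the ones printed in the log; run them on the test node only):

    # Is anything still listening on the NVMe/TCP port used by the run?
    ss -ltn 'sport = :4420'

    # Reproduce the refusal by hand with bash's /dev/tcp probe; with no listener
    # bound, the connect fails immediately with "Connection refused".
    timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' && echo connected || echo 'connect refused/failed'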
00:23:22.547 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:22.547 13:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1165262 00:23:23.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1165262) - No such process 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:23.489 rmmod nvme_tcp 00:23:23.489 rmmod nvme_fabrics 00:23:23.489 rmmod nvme_keyring 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:23.489 13:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.038 13:53:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:26.038 00:23:26.038 real 0m7.740s 00:23:26.038 user 0m18.737s 00:23:26.038 sys 0m1.238s 00:23:26.038 
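Two defensive patterns in the teardown above are worth noting: the harness runs kill -9 against a target PID that has already exited, gets "No such process", and masks that expected failure with true so the cleanup keeps going; it then unloads the kernel initiator modules, where removing nvme-tcp also drops its now-unused dependencies (the rmmod nvme_fabrics and rmmod nvme_keyring lines) before a second modprobe -r nvme-fabrics runs as a belt-and-braces step. A stand-alone sketch of the same idea, with an assumed PID file path in place of the harness's pgrep logic:

    #!/usr/bin/env bash
    # Stop the target if it is still running; "|| true" masks the expected
    # "No such process" error when it has already exited, as in the log above.
    pid=$(cat /tmp/nvmf_tgt.pid 2>/dev/null || true)
    [ -n "$pid" ] && sudo kill -9 "$pid" 2>/dev/null || true

    # Unload the kernel NVMe/TCP initiator modules; removing nvme-tcp first lets
    # modprobe also drop its now-unused fabrics dependencies.
    sudo modprobe -v -r nvme-tcp
    sudo modprobe -v -r nvme-fabrics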
13:53:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:26.038 13:53:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.038 ************************************ 00:23:26.038 END TEST nvmf_shutdown_tc3 00:23:26.038 ************************************ 00:23:26.038 13:53:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:26.038 13:53:52 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:26.038 00:23:26.038 real 0m32.551s 00:23:26.038 user 1m16.971s 00:23:26.038 sys 0m9.162s 00:23:26.038 13:53:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:26.038 13:53:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:26.038 ************************************ 00:23:26.038 END TEST nvmf_shutdown 00:23:26.038 ************************************ 00:23:26.038 13:53:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:26.038 13:53:52 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:26.038 13:53:52 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.038 13:53:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:26.038 13:53:52 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:26.038 13:53:52 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:26.038 13:53:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:26.038 13:53:52 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:26.038 13:53:52 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:26.038 13:53:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:26.038 13:53:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:26.038 13:53:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:26.038 ************************************ 00:23:26.038 START TEST nvmf_multicontroller 00:23:26.038 ************************************ 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:26.039 * Looking for test storage... 
00:23:26.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:26.039 13:53:52 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:26.039 13:53:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.676 13:53:59 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:32.676 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:32.676 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:32.676 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:32.676 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.676 13:53:59 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:32.676 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:32.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:23:32.937 00:23:32.937 --- 10.0.0.2 ping statistics --- 00:23:32.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.937 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:32.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:23:32.937 00:23:32.937 --- 10.0.0.1 ping statistics --- 00:23:32.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.937 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1170309 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1170309 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1170309 ']' 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:32.937 13:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.937 [2024-07-15 13:53:59.440047] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:32.937 [2024-07-15 13:53:59.440178] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.197 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.197 [2024-07-15 13:53:59.530435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:33.197 [2024-07-15 13:53:59.624582] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.197 [2024-07-15 13:53:59.624639] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.197 [2024-07-15 13:53:59.624648] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.197 [2024-07-15 13:53:59.624654] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.197 [2024-07-15 13:53:59.624660] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
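For reference while reading the rpc_cmd calls that follow (host/multicontroller.sh lines 27-41), the same target-side setup can be reproduced by hand with SPDK's scripts/rpc.py against the RPC socket the target advertises above. This is only a sketch: the ./scripts/rpc.py path and the default /var/tmp/spdk.sock socket are assumptions, and the test itself issues these calls through its rpc_cmd wrapper rather than invoking rpc.py directly.

  # Sketch: standalone equivalent of the rpc_cmd sequence below (assumed: ./scripts/rpc.py, default /var/tmp/spdk.sock)
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport, flags exactly as the test passes them
  $rpc bdev_malloc_create 64 512 -b Malloc0                                       # 64 MB RAM bdev with 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host, fixed serial
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # cnode2 backed by Malloc1 is created the same way on the same two ports, giving the
  # bdevperf host two subsystems and two listeners to exercise the duplicate-attach,
  # multipath-disable and failover cases against.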
00:23:33.197 [2024-07-15 13:53:59.624825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.198 [2024-07-15 13:53:59.624972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.198 [2024-07-15 13:53:59.624973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.767 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:33.767 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:33.767 13:54:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:33.767 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:33.767 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.767 13:54:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.767 13:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:33.767 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.767 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.767 [2024-07-15 13:54:00.265700] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.767 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.767 13:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:33.767 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.767 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.028 Malloc0 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.028 [2024-07-15 13:54:00.339974] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.028 
13:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.028 [2024-07-15 13:54:00.351923] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.028 Malloc1 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1170441 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1170441 /var/tmp/bdevperf.sock 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1170441 ']' 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:34.028 13:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.971 NVMe0n1 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.971 1 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.971 request: 00:23:34.971 { 00:23:34.971 "name": "NVMe0", 00:23:34.971 "trtype": "tcp", 00:23:34.971 "traddr": "10.0.0.2", 00:23:34.971 "adrfam": "ipv4", 00:23:34.971 "trsvcid": "4420", 00:23:34.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.971 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:34.971 "hostaddr": "10.0.0.2", 00:23:34.971 "hostsvcid": "60000", 00:23:34.971 "prchk_reftag": false, 00:23:34.971 "prchk_guard": false, 00:23:34.971 "hdgst": false, 00:23:34.971 "ddgst": false, 00:23:34.971 "method": "bdev_nvme_attach_controller", 00:23:34.971 "req_id": 1 00:23:34.971 } 00:23:34.971 Got JSON-RPC error response 00:23:34.971 response: 00:23:34.971 { 00:23:34.971 "code": -114, 00:23:34.971 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:34.971 } 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.971 request: 00:23:34.971 { 00:23:34.971 "name": "NVMe0", 00:23:34.971 "trtype": "tcp", 00:23:34.971 "traddr": "10.0.0.2", 00:23:34.971 "adrfam": "ipv4", 00:23:34.971 "trsvcid": "4420", 00:23:34.971 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:34.971 "hostaddr": "10.0.0.2", 00:23:34.971 "hostsvcid": "60000", 00:23:34.971 "prchk_reftag": false, 00:23:34.971 "prchk_guard": false, 00:23:34.971 
"hdgst": false, 00:23:34.971 "ddgst": false, 00:23:34.971 "method": "bdev_nvme_attach_controller", 00:23:34.971 "req_id": 1 00:23:34.971 } 00:23:34.971 Got JSON-RPC error response 00:23:34.971 response: 00:23:34.971 { 00:23:34.971 "code": -114, 00:23:34.971 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:34.971 } 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:34.971 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.972 request: 00:23:34.972 { 00:23:34.972 "name": "NVMe0", 00:23:34.972 "trtype": "tcp", 00:23:34.972 "traddr": "10.0.0.2", 00:23:34.972 "adrfam": "ipv4", 00:23:34.972 "trsvcid": "4420", 00:23:34.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.972 "hostaddr": "10.0.0.2", 00:23:34.972 "hostsvcid": "60000", 00:23:34.972 "prchk_reftag": false, 00:23:34.972 "prchk_guard": false, 00:23:34.972 "hdgst": false, 00:23:34.972 "ddgst": false, 00:23:34.972 "multipath": "disable", 00:23:34.972 "method": "bdev_nvme_attach_controller", 00:23:34.972 "req_id": 1 00:23:34.972 } 00:23:34.972 Got JSON-RPC error response 00:23:34.972 response: 00:23:34.972 { 00:23:34.972 "code": -114, 00:23:34.972 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:34.972 } 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.972 13:54:01 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.972 request: 00:23:34.972 { 00:23:34.972 "name": "NVMe0", 00:23:34.972 "trtype": "tcp", 00:23:34.972 "traddr": "10.0.0.2", 00:23:34.972 "adrfam": "ipv4", 00:23:34.972 "trsvcid": "4420", 00:23:34.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.972 "hostaddr": "10.0.0.2", 00:23:34.972 "hostsvcid": "60000", 00:23:34.972 "prchk_reftag": false, 00:23:34.972 "prchk_guard": false, 00:23:34.972 "hdgst": false, 00:23:34.972 "ddgst": false, 00:23:34.972 "multipath": "failover", 00:23:34.972 "method": "bdev_nvme_attach_controller", 00:23:34.972 "req_id": 1 00:23:34.972 } 00:23:34.972 Got JSON-RPC error response 00:23:34.972 response: 00:23:34.972 { 00:23:34.972 "code": -114, 00:23:34.972 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:34.972 } 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.972 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.972 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.256 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.256 13:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:35.256 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.256 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.256 00:23:35.256 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.256 13:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:35.256 13:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:35.256 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.256 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.256 13:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.256 13:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:35.256 13:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:36.226 0 00:23:36.226 13:54:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:36.226 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.226 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.226 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.226 13:54:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1170441 00:23:36.226 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1170441 ']' 00:23:36.227 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1170441 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1170441 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1170441' 00:23:36.488 killing process with pid 1170441 00:23:36.488 13:54:02 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1170441 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1170441 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:36.488 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:36.488 [2024-07-15 13:54:00.472215] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:36.488 [2024-07-15 13:54:00.472267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170441 ] 00:23:36.488 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.488 [2024-07-15 13:54:00.530242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.488 [2024-07-15 13:54:00.594870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.488 [2024-07-15 13:54:01.602375] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 6ee4e39e-dfd5-4b5c-8b82-85954ac1a008 already exists 00:23:36.488 [2024-07-15 13:54:01.602404] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:6ee4e39e-dfd5-4b5c-8b82-85954ac1a008 alias for bdev NVMe1n1 00:23:36.488 [2024-07-15 13:54:01.602413] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:36.488 Running I/O for 1 seconds... 
00:23:36.488 00:23:36.488 Latency(us) 00:23:36.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.488 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:36.488 NVMe0n1 : 1.00 28234.14 110.29 0.00 0.00 4519.17 4150.61 13216.43 00:23:36.488 =================================================================================================================== 00:23:36.488 Total : 28234.14 110.29 0.00 0.00 4519.17 4150.61 13216.43 00:23:36.488 Received shutdown signal, test time was about 1.000000 seconds 00:23:36.488 00:23:36.488 Latency(us) 00:23:36.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.488 =================================================================================================================== 00:23:36.488 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:36.488 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.488 13:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.488 rmmod nvme_tcp 00:23:36.488 rmmod nvme_fabrics 00:23:36.748 rmmod nvme_keyring 00:23:36.748 13:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.748 13:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:36.748 13:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:36.748 13:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1170309 ']' 00:23:36.748 13:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1170309 00:23:36.748 13:54:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1170309 ']' 00:23:36.748 13:54:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1170309 00:23:36.748 13:54:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:36.748 13:54:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.748 13:54:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1170309 00:23:36.748 13:54:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:36.749 13:54:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:36.749 13:54:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1170309' 00:23:36.749 killing process with pid 1170309 00:23:36.749 13:54:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1170309 00:23:36.749 13:54:03 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1170309 00:23:36.749 13:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.749 13:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:36.749 13:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.749 13:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.749 13:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.749 13:54:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.749 13:54:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.749 13:54:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.292 13:54:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.292 00:23:39.292 real 0m13.161s 00:23:39.292 user 0m15.668s 00:23:39.292 sys 0m6.028s 00:23:39.292 13:54:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:39.292 13:54:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.292 ************************************ 00:23:39.292 END TEST nvmf_multicontroller 00:23:39.292 ************************************ 00:23:39.292 13:54:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:39.292 13:54:05 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:39.292 13:54:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:39.293 13:54:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:39.293 13:54:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.293 ************************************ 00:23:39.293 START TEST nvmf_aer 00:23:39.293 ************************************ 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:39.293 * Looking for test storage... 
00:23:39.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.293 13:54:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.880 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:45.881 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:23:45.881 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:45.881 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:45.881 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.881 
13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.881 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:46.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:23:46.141 00:23:46.141 --- 10.0.0.2 ping statistics --- 00:23:46.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.141 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:46.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:23:46.141 00:23:46.141 --- 10.0.0.1 ping statistics --- 00:23:46.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.141 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:46.141 13:54:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:46.402 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1175594 00:23:46.402 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1175594 00:23:46.402 13:54:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1175594 ']' 00:23:46.402 13:54:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.402 13:54:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.402 13:54:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.402 13:54:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.402 13:54:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:46.402 13:54:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:46.402 [2024-07-15 13:54:12.722043] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:46.402 [2024-07-15 13:54:12.722102] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.402 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.402 [2024-07-15 13:54:12.793332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:46.402 [2024-07-15 13:54:12.866254] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.402 [2024-07-15 13:54:12.866291] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:46.402 [2024-07-15 13:54:12.866299] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.402 [2024-07-15 13:54:12.866305] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.402 [2024-07-15 13:54:12.866311] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.402 [2024-07-15 13:54:12.866921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.402 [2024-07-15 13:54:12.867005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.402 [2024-07-15 13:54:12.867218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:46.402 [2024-07-15 13:54:12.867429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.972 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:46.973 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:23:46.973 13:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:46.973 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:46.973 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.234 [2024-07-15 13:54:13.539713] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.234 Malloc0 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.234 [2024-07-15 13:54:13.599108] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.234 [ 00:23:47.234 { 00:23:47.234 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:47.234 "subtype": "Discovery", 00:23:47.234 "listen_addresses": [], 00:23:47.234 "allow_any_host": true, 00:23:47.234 "hosts": [] 00:23:47.234 }, 00:23:47.234 { 00:23:47.234 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.234 "subtype": "NVMe", 00:23:47.234 "listen_addresses": [ 00:23:47.234 { 00:23:47.234 "trtype": "TCP", 00:23:47.234 "adrfam": "IPv4", 00:23:47.234 "traddr": "10.0.0.2", 00:23:47.234 "trsvcid": "4420" 00:23:47.234 } 00:23:47.234 ], 00:23:47.234 "allow_any_host": true, 00:23:47.234 "hosts": [], 00:23:47.234 "serial_number": "SPDK00000000000001", 00:23:47.234 "model_number": "SPDK bdev Controller", 00:23:47.234 "max_namespaces": 2, 00:23:47.234 "min_cntlid": 1, 00:23:47.234 "max_cntlid": 65519, 00:23:47.234 "namespaces": [ 00:23:47.234 { 00:23:47.234 "nsid": 1, 00:23:47.234 "bdev_name": "Malloc0", 00:23:47.234 "name": "Malloc0", 00:23:47.234 "nguid": "16B9C507B83F440283464CE6BEE76BCD", 00:23:47.234 "uuid": "16b9c507-b83f-4402-8346-4ce6bee76bcd" 00:23:47.234 } 00:23:47.234 ] 00:23:47.234 } 00:23:47.234 ] 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1175935 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:47.234 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:47.234 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.495 Malloc1 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.495 Asynchronous Event Request test 00:23:47.495 Attaching to 10.0.0.2 00:23:47.495 Attached to 10.0.0.2 00:23:47.495 Registering asynchronous event callbacks... 00:23:47.495 Starting namespace attribute notice tests for all controllers... 00:23:47.495 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:47.495 aer_cb - Changed Namespace 00:23:47.495 Cleaning up... 00:23:47.495 [ 00:23:47.495 { 00:23:47.495 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:47.495 "subtype": "Discovery", 00:23:47.495 "listen_addresses": [], 00:23:47.495 "allow_any_host": true, 00:23:47.495 "hosts": [] 00:23:47.495 }, 00:23:47.495 { 00:23:47.495 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.495 "subtype": "NVMe", 00:23:47.495 "listen_addresses": [ 00:23:47.495 { 00:23:47.495 "trtype": "TCP", 00:23:47.495 "adrfam": "IPv4", 00:23:47.495 "traddr": "10.0.0.2", 00:23:47.495 "trsvcid": "4420" 00:23:47.495 } 00:23:47.495 ], 00:23:47.495 "allow_any_host": true, 00:23:47.495 "hosts": [], 00:23:47.495 "serial_number": "SPDK00000000000001", 00:23:47.495 "model_number": "SPDK bdev Controller", 00:23:47.495 "max_namespaces": 2, 00:23:47.495 "min_cntlid": 1, 00:23:47.495 "max_cntlid": 65519, 00:23:47.495 "namespaces": [ 00:23:47.495 { 00:23:47.495 "nsid": 1, 00:23:47.495 "bdev_name": "Malloc0", 00:23:47.495 "name": "Malloc0", 00:23:47.495 "nguid": "16B9C507B83F440283464CE6BEE76BCD", 00:23:47.495 "uuid": "16b9c507-b83f-4402-8346-4ce6bee76bcd" 00:23:47.495 }, 00:23:47.495 { 00:23:47.495 "nsid": 2, 00:23:47.495 "bdev_name": "Malloc1", 00:23:47.495 "name": "Malloc1", 00:23:47.495 "nguid": "5434FEE55BEE430F96A86DEE6425C78D", 00:23:47.495 "uuid": "5434fee5-5bee-430f-96a8-6dee6425c78d" 00:23:47.495 } 00:23:47.495 ] 00:23:47.495 } 00:23:47.495 ] 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1175935 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:47.495 13:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:47.495 rmmod nvme_tcp 00:23:47.495 rmmod nvme_fabrics 00:23:47.495 rmmod nvme_keyring 00:23:47.495 13:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:47.495 13:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:47.495 13:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:47.495 13:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1175594 ']' 00:23:47.495 13:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1175594 00:23:47.495 13:54:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1175594 ']' 00:23:47.495 13:54:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1175594 00:23:47.495 13:54:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:23:47.755 13:54:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:47.755 13:54:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1175594 00:23:47.755 13:54:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:47.755 13:54:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:47.755 13:54:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1175594' 00:23:47.755 killing process with pid 1175594 00:23:47.755 13:54:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1175594 00:23:47.755 13:54:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1175594 00:23:47.755 13:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:47.755 13:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:47.755 13:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:47.755 13:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:47.755 13:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:47.755 13:54:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.755 13:54:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:23:47.755 13:54:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.320 13:54:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:50.320 00:23:50.320 real 0m10.884s 00:23:50.320 user 0m7.463s 00:23:50.320 sys 0m5.676s 00:23:50.320 13:54:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:50.320 13:54:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:50.320 ************************************ 00:23:50.320 END TEST nvmf_aer 00:23:50.320 ************************************ 00:23:50.320 13:54:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:50.320 13:54:16 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:50.320 13:54:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:50.320 13:54:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:50.320 13:54:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.320 ************************************ 00:23:50.320 START TEST nvmf_async_init 00:23:50.320 ************************************ 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:50.320 * Looking for test storage... 00:23:50.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=9b1e66ac3d8449a08e86892008d75b68 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.320 13:54:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.907 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:56.908 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:56.908 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:56.908 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:56.908 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.908 
13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.908 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:57.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:23:57.169 00:23:57.169 --- 10.0.0.2 ping statistics --- 00:23:57.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.169 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:57.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:23:57.169 00:23:57.169 --- 10.0.0.1 ping statistics --- 00:23:57.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.169 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:57.169 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:57.430 13:54:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:57.430 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:57.430 13:54:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:57.430 13:54:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.430 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1180068 00:23:57.430 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1180068 00:23:57.430 13:54:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:23:57.430 13:54:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1180068 ']' 00:23:57.430 13:54:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.430 13:54:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.430 13:54:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.430 13:54:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.430 13:54:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.430 [2024-07-15 13:54:23.787923] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:57.430 [2024-07-15 13:54:23.787987] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.430 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.430 [2024-07-15 13:54:23.858871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.430 [2024-07-15 13:54:23.933021] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.430 [2024-07-15 13:54:23.933058] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.430 [2024-07-15 13:54:23.933066] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.430 [2024-07-15 13:54:23.933072] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.430 [2024-07-15 13:54:23.933078] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
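The namespace plumbing that nvmf_tcp_init performs in the trace above amounts to moving one E810 port into a private namespace so that 10.0.0.1 (root namespace, cvl_0_1) and 10.0.0.2 (cvl_0_0_ns_spdk, cvl_0_0) sit on opposite sides of the physical link. A minimal sketch of that topology, using the interface names from this run (they vary per host):

    # target-side port goes into its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address both ends and bring the links up
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic reach the default port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity pings in both directions, as logged above
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt process is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1, as in the trace), which is why every listener created afterwards binds to 10.0.0.2.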
00:23:57.430 [2024-07-15 13:54:23.933100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.373 [2024-07-15 13:54:24.599953] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.373 null0 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9b1e66ac3d8449a08e86892008d75b68 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.373 [2024-07-15 13:54:24.656192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.373 nvme0n1 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.373 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.634 [ 00:23:58.634 { 00:23:58.634 "name": "nvme0n1", 00:23:58.634 "aliases": [ 00:23:58.634 "9b1e66ac-3d84-49a0-8e86-892008d75b68" 00:23:58.634 ], 00:23:58.634 "product_name": "NVMe disk", 00:23:58.634 "block_size": 512, 00:23:58.634 "num_blocks": 2097152, 00:23:58.634 "uuid": "9b1e66ac-3d84-49a0-8e86-892008d75b68", 00:23:58.634 "assigned_rate_limits": { 00:23:58.634 "rw_ios_per_sec": 0, 00:23:58.634 "rw_mbytes_per_sec": 0, 00:23:58.634 "r_mbytes_per_sec": 0, 00:23:58.634 "w_mbytes_per_sec": 0 00:23:58.634 }, 00:23:58.634 "claimed": false, 00:23:58.634 "zoned": false, 00:23:58.634 "supported_io_types": { 00:23:58.634 "read": true, 00:23:58.634 "write": true, 00:23:58.634 "unmap": false, 00:23:58.634 "flush": true, 00:23:58.634 "reset": true, 00:23:58.634 "nvme_admin": true, 00:23:58.634 "nvme_io": true, 00:23:58.634 "nvme_io_md": false, 00:23:58.634 "write_zeroes": true, 00:23:58.634 "zcopy": false, 00:23:58.634 "get_zone_info": false, 00:23:58.634 "zone_management": false, 00:23:58.634 "zone_append": false, 00:23:58.634 "compare": true, 00:23:58.634 "compare_and_write": true, 00:23:58.634 "abort": true, 00:23:58.634 "seek_hole": false, 00:23:58.634 "seek_data": false, 00:23:58.634 "copy": true, 00:23:58.634 "nvme_iov_md": false 00:23:58.634 }, 00:23:58.634 "memory_domains": [ 00:23:58.634 { 00:23:58.634 "dma_device_id": "system", 00:23:58.634 "dma_device_type": 1 00:23:58.634 } 00:23:58.634 ], 00:23:58.634 "driver_specific": { 00:23:58.634 "nvme": [ 00:23:58.634 { 00:23:58.634 "trid": { 00:23:58.634 "trtype": "TCP", 00:23:58.634 "adrfam": "IPv4", 00:23:58.634 "traddr": "10.0.0.2", 00:23:58.634 "trsvcid": "4420", 00:23:58.634 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:58.634 }, 00:23:58.634 "ctrlr_data": { 00:23:58.634 "cntlid": 1, 00:23:58.634 "vendor_id": "0x8086", 00:23:58.634 "model_number": "SPDK bdev Controller", 00:23:58.634 "serial_number": "00000000000000000000", 00:23:58.634 "firmware_revision": "24.09", 00:23:58.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:58.634 "oacs": { 00:23:58.634 "security": 0, 00:23:58.634 "format": 0, 00:23:58.634 "firmware": 0, 00:23:58.634 "ns_manage": 0 00:23:58.634 }, 00:23:58.634 "multi_ctrlr": true, 00:23:58.634 "ana_reporting": false 00:23:58.634 }, 00:23:58.634 "vs": { 00:23:58.634 "nvme_version": "1.3" 00:23:58.634 }, 00:23:58.634 "ns_data": { 00:23:58.634 "id": 1, 00:23:58.634 "can_share": true 00:23:58.634 } 00:23:58.634 } 00:23:58.634 ], 00:23:58.634 "mp_policy": "active_passive" 00:23:58.634 } 00:23:58.634 } 00:23:58.634 ] 00:23:58.634 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.634 13:54:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
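Everything the async_init host test does to the target goes through the JSON-RPC socket shown in the trace (/var/tmp/spdk.sock). rpc_cmd is a thin wrapper around the stock scripts/rpc.py client; a roughly equivalent manual sequence, with the method names and arguments taken from the trace and the client invocation assumed, would be:

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    # TCP transport plus a 1024 MB null bdev with 512-byte blocks to export
    $RPC nvmf_create_transport -t tcp -o
    $RPC bdev_null_create null0 1024 512
    # subsystem cnode0 with a fixed NGUID namespace and a TCP listener on 4420
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9b1e66ac3d8449a08e86892008d75b68
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # loop the initiator back through bdev_nvme, inspect it, then exercise a reset
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    $RPC bdev_get_bdevs -b nvme0n1
    $RPC bdev_nvme_reset_controller nvme0

The bdev_get_bdevs dump above shows the attached controller with cntlid 1; after the reset that follows, the same bdev comes back with cntlid 2.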
00:23:58.634 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.634 13:54:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.634 [2024-07-15 13:54:24.924973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:58.634 [2024-07-15 13:54:24.925033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223adf0 (9): Bad file descriptor 00:23:58.634 [2024-07-15 13:54:25.057220] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:58.634 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.634 13:54:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:58.634 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.634 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.634 [ 00:23:58.634 { 00:23:58.634 "name": "nvme0n1", 00:23:58.634 "aliases": [ 00:23:58.634 "9b1e66ac-3d84-49a0-8e86-892008d75b68" 00:23:58.634 ], 00:23:58.634 "product_name": "NVMe disk", 00:23:58.634 "block_size": 512, 00:23:58.634 "num_blocks": 2097152, 00:23:58.634 "uuid": "9b1e66ac-3d84-49a0-8e86-892008d75b68", 00:23:58.634 "assigned_rate_limits": { 00:23:58.634 "rw_ios_per_sec": 0, 00:23:58.634 "rw_mbytes_per_sec": 0, 00:23:58.635 "r_mbytes_per_sec": 0, 00:23:58.635 "w_mbytes_per_sec": 0 00:23:58.635 }, 00:23:58.635 "claimed": false, 00:23:58.635 "zoned": false, 00:23:58.635 "supported_io_types": { 00:23:58.635 "read": true, 00:23:58.635 "write": true, 00:23:58.635 "unmap": false, 00:23:58.635 "flush": true, 00:23:58.635 "reset": true, 00:23:58.635 "nvme_admin": true, 00:23:58.635 "nvme_io": true, 00:23:58.635 "nvme_io_md": false, 00:23:58.635 "write_zeroes": true, 00:23:58.635 "zcopy": false, 00:23:58.635 "get_zone_info": false, 00:23:58.635 "zone_management": false, 00:23:58.635 "zone_append": false, 00:23:58.635 "compare": true, 00:23:58.635 "compare_and_write": true, 00:23:58.635 "abort": true, 00:23:58.635 "seek_hole": false, 00:23:58.635 "seek_data": false, 00:23:58.635 "copy": true, 00:23:58.635 "nvme_iov_md": false 00:23:58.635 }, 00:23:58.635 "memory_domains": [ 00:23:58.635 { 00:23:58.635 "dma_device_id": "system", 00:23:58.635 "dma_device_type": 1 00:23:58.635 } 00:23:58.635 ], 00:23:58.635 "driver_specific": { 00:23:58.635 "nvme": [ 00:23:58.635 { 00:23:58.635 "trid": { 00:23:58.635 "trtype": "TCP", 00:23:58.635 "adrfam": "IPv4", 00:23:58.635 "traddr": "10.0.0.2", 00:23:58.635 "trsvcid": "4420", 00:23:58.635 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:58.635 }, 00:23:58.635 "ctrlr_data": { 00:23:58.635 "cntlid": 2, 00:23:58.635 "vendor_id": "0x8086", 00:23:58.635 "model_number": "SPDK bdev Controller", 00:23:58.635 "serial_number": "00000000000000000000", 00:23:58.635 "firmware_revision": "24.09", 00:23:58.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:58.635 "oacs": { 00:23:58.635 "security": 0, 00:23:58.635 "format": 0, 00:23:58.635 "firmware": 0, 00:23:58.635 "ns_manage": 0 00:23:58.635 }, 00:23:58.635 "multi_ctrlr": true, 00:23:58.635 "ana_reporting": false 00:23:58.635 }, 00:23:58.635 "vs": { 00:23:58.635 "nvme_version": "1.3" 00:23:58.635 }, 00:23:58.635 "ns_data": { 00:23:58.635 "id": 1, 00:23:58.635 "can_share": true 00:23:58.635 } 00:23:58.635 } 00:23:58.635 ], 00:23:58.635 "mp_policy": "active_passive" 00:23:58.635 } 00:23:58.635 } 
00:23:58.635 ] 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.wdKquNcxvr 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.wdKquNcxvr 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.635 [2024-07-15 13:54:25.121573] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.635 [2024-07-15 13:54:25.121690] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wdKquNcxvr 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.635 [2024-07-15 13:54:25.133600] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wdKquNcxvr 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.635 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.635 [2024-07-15 13:54:25.145651] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.635 [2024-07-15 13:54:25.145688] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
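The last leg of the test switches to a TLS-protected listener: the interchange-format PSK is written to a mode-0600 temp file, any-host access is disabled, a --secure-channel listener is added on port 4421, host1 is registered with the key, and the initiator reattaches with --psk. Both tcp.c and bdev_nvme warn that the PSK-path form is experimental and deprecated. Condensed, under the same RPC client assumption as above and with the key value taken from the trace:

    KEY=$(mktemp)   # /tmp/tmp.wdKquNcxvr in this run
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
    chmod 0600 "$KEY"
    # require explicit host registration, then expose a TLS listener on 4421
    $RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY"
    # reconnect the initiator over the secure channel
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

The bdev_get_bdevs output that follows shows the same namespace now reached via trsvcid 4421 with cntlid 3.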
00:23:58.896 nvme0n1 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.896 [ 00:23:58.896 { 00:23:58.896 "name": "nvme0n1", 00:23:58.896 "aliases": [ 00:23:58.896 "9b1e66ac-3d84-49a0-8e86-892008d75b68" 00:23:58.896 ], 00:23:58.896 "product_name": "NVMe disk", 00:23:58.896 "block_size": 512, 00:23:58.896 "num_blocks": 2097152, 00:23:58.896 "uuid": "9b1e66ac-3d84-49a0-8e86-892008d75b68", 00:23:58.896 "assigned_rate_limits": { 00:23:58.896 "rw_ios_per_sec": 0, 00:23:58.896 "rw_mbytes_per_sec": 0, 00:23:58.896 "r_mbytes_per_sec": 0, 00:23:58.896 "w_mbytes_per_sec": 0 00:23:58.896 }, 00:23:58.896 "claimed": false, 00:23:58.896 "zoned": false, 00:23:58.896 "supported_io_types": { 00:23:58.896 "read": true, 00:23:58.896 "write": true, 00:23:58.896 "unmap": false, 00:23:58.896 "flush": true, 00:23:58.896 "reset": true, 00:23:58.896 "nvme_admin": true, 00:23:58.896 "nvme_io": true, 00:23:58.896 "nvme_io_md": false, 00:23:58.896 "write_zeroes": true, 00:23:58.896 "zcopy": false, 00:23:58.896 "get_zone_info": false, 00:23:58.896 "zone_management": false, 00:23:58.896 "zone_append": false, 00:23:58.896 "compare": true, 00:23:58.896 "compare_and_write": true, 00:23:58.896 "abort": true, 00:23:58.896 "seek_hole": false, 00:23:58.896 "seek_data": false, 00:23:58.896 "copy": true, 00:23:58.896 "nvme_iov_md": false 00:23:58.896 }, 00:23:58.896 "memory_domains": [ 00:23:58.896 { 00:23:58.896 "dma_device_id": "system", 00:23:58.896 "dma_device_type": 1 00:23:58.896 } 00:23:58.896 ], 00:23:58.896 "driver_specific": { 00:23:58.896 "nvme": [ 00:23:58.896 { 00:23:58.896 "trid": { 00:23:58.896 "trtype": "TCP", 00:23:58.896 "adrfam": "IPv4", 00:23:58.896 "traddr": "10.0.0.2", 00:23:58.896 "trsvcid": "4421", 00:23:58.896 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:58.896 }, 00:23:58.896 "ctrlr_data": { 00:23:58.896 "cntlid": 3, 00:23:58.896 "vendor_id": "0x8086", 00:23:58.896 "model_number": "SPDK bdev Controller", 00:23:58.896 "serial_number": "00000000000000000000", 00:23:58.896 "firmware_revision": "24.09", 00:23:58.896 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:58.896 "oacs": { 00:23:58.896 "security": 0, 00:23:58.896 "format": 0, 00:23:58.896 "firmware": 0, 00:23:58.896 "ns_manage": 0 00:23:58.896 }, 00:23:58.896 "multi_ctrlr": true, 00:23:58.896 "ana_reporting": false 00:23:58.896 }, 00:23:58.896 "vs": { 00:23:58.896 "nvme_version": "1.3" 00:23:58.896 }, 00:23:58.896 "ns_data": { 00:23:58.896 "id": 1, 00:23:58.896 "can_share": true 00:23:58.896 } 00:23:58.896 } 00:23:58.896 ], 00:23:58.896 "mp_policy": "active_passive" 00:23:58.896 } 00:23:58.896 } 00:23:58.896 ] 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.wdKquNcxvr 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:58.896 rmmod nvme_tcp 00:23:58.896 rmmod nvme_fabrics 00:23:58.896 rmmod nvme_keyring 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1180068 ']' 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1180068 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1180068 ']' 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1180068 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1180068 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1180068' 00:23:58.896 killing process with pid 1180068 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1180068 00:23:58.896 [2024-07-15 13:54:25.382532] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:58.896 [2024-07-15 13:54:25.382558] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:58.896 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1180068 00:23:59.157 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:59.157 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:59.157 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:59.157 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.157 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:59.157 13:54:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.157 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.157 13:54:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:24:01.070 13:54:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:01.070 00:24:01.070 real 0m11.216s 00:24:01.070 user 0m3.963s 00:24:01.070 sys 0m5.715s 00:24:01.070 13:54:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:01.070 13:54:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.070 ************************************ 00:24:01.070 END TEST nvmf_async_init 00:24:01.070 ************************************ 00:24:01.331 13:54:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:01.331 13:54:27 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:01.331 13:54:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:01.331 13:54:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.331 13:54:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:01.331 ************************************ 00:24:01.331 START TEST dma 00:24:01.331 ************************************ 00:24:01.331 13:54:27 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:01.331 * Looking for test storage... 00:24:01.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:01.331 13:54:27 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.331 13:54:27 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.331 13:54:27 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.331 13:54:27 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.331 13:54:27 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.331 13:54:27 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.331 13:54:27 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.331 13:54:27 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:01.331 13:54:27 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:01.331 13:54:27 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.331 13:54:27 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:01.331 13:54:27 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:01.331 00:24:01.331 real 0m0.107s 00:24:01.331 user 0m0.043s 00:24:01.331 sys 0m0.069s 00:24:01.331 13:54:27 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:01.331 13:54:27 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:24:01.331 ************************************ 00:24:01.331 END TEST dma 00:24:01.331 ************************************ 00:24:01.331 13:54:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:01.331 13:54:27 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:01.331 13:54:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:01.331 13:54:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.331 13:54:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:01.331 ************************************ 00:24:01.331 START TEST nvmf_identify 00:24:01.331 ************************************ 00:24:01.331 13:54:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:01.592 * Looking for test storage... 00:24:01.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:01.592 13:54:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:09.779 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:09.779 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:09.779 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:09.779 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:09.779 13:54:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.779 13:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.779 13:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.779 13:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:09.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:09.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:24:09.780 00:24:09.780 --- 10.0.0.2 ping statistics --- 00:24:09.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.780 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:09.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.434 ms 00:24:09.780 00:24:09.780 --- 10.0.0.1 ping statistics --- 00:24:09.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.780 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1184657 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1184657 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1184657 ']' 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.780 13:54:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.780 [2024-07-15 13:54:35.236109] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:24:09.780 [2024-07-15 13:54:35.236189] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.780 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.780 [2024-07-15 13:54:35.308211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:09.780 [2024-07-15 13:54:35.388289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.780 [2024-07-15 13:54:35.388326] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.780 [2024-07-15 13:54:35.388334] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.780 [2024-07-15 13:54:35.388340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.780 [2024-07-15 13:54:35.388346] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.780 [2024-07-15 13:54:35.388881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.780 [2024-07-15 13:54:35.388964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.780 [2024-07-15 13:54:35.389119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.780 [2024-07-15 13:54:35.389121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.780 [2024-07-15 13:54:36.030687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.780 Malloc0 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.780 [2024-07-15 13:54:36.130154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.780 [ 00:24:09.780 { 00:24:09.780 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:09.780 "subtype": "Discovery", 00:24:09.780 "listen_addresses": [ 00:24:09.780 { 00:24:09.780 "trtype": "TCP", 00:24:09.780 "adrfam": "IPv4", 00:24:09.780 "traddr": "10.0.0.2", 00:24:09.780 "trsvcid": "4420" 00:24:09.780 } 00:24:09.780 ], 00:24:09.780 "allow_any_host": true, 00:24:09.780 "hosts": [] 00:24:09.780 }, 00:24:09.780 { 00:24:09.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.780 "subtype": "NVMe", 00:24:09.780 "listen_addresses": [ 00:24:09.780 { 00:24:09.780 "trtype": "TCP", 00:24:09.780 "adrfam": "IPv4", 00:24:09.780 "traddr": "10.0.0.2", 00:24:09.780 "trsvcid": "4420" 00:24:09.780 } 00:24:09.780 ], 00:24:09.780 "allow_any_host": true, 00:24:09.780 "hosts": [], 00:24:09.780 "serial_number": "SPDK00000000000001", 00:24:09.780 "model_number": "SPDK bdev Controller", 00:24:09.780 "max_namespaces": 32, 00:24:09.780 "min_cntlid": 1, 00:24:09.780 "max_cntlid": 65519, 00:24:09.780 "namespaces": [ 00:24:09.780 { 00:24:09.780 "nsid": 1, 00:24:09.780 "bdev_name": "Malloc0", 00:24:09.780 "name": "Malloc0", 00:24:09.780 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:09.780 "eui64": "ABCDEF0123456789", 00:24:09.780 "uuid": "b4449826-0e97-494b-a637-3cd8999a1596" 00:24:09.780 } 00:24:09.780 ] 00:24:09.780 } 00:24:09.780 ] 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.780 13:54:36 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:09.780 [2024-07-15 13:54:36.191875] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
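(Editor's note: the target-side configuration that the identify test has just assembled via rpc_cmd can be reproduced with standalone RPC calls. The sketch below is only a consolidation of the parameters visible in the trace — TCP transport, a 64 MB Malloc bdev with 512-byte blocks, subsystem cnode1 with the NGUID/EUI64 namespace, plus data and discovery listeners on 10.0.0.2:4420 — and assumes scripts/rpc.py can reach the nvmf_tgt started inside the target network namespace.)

# Transport and backing bdev
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

# Subsystem, namespace, and listeners as used by the identify test
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Inspect the result (matches the JSON dump shown in the trace)
scripts/rpc.py nvmf_get_subsystems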
00:24:09.780 [2024-07-15 13:54:36.191940] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184776 ] 00:24:09.780 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.780 [2024-07-15 13:54:36.224813] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:09.780 [2024-07-15 13:54:36.224864] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:09.780 [2024-07-15 13:54:36.224870] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:09.780 [2024-07-15 13:54:36.224882] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:09.780 [2024-07-15 13:54:36.224888] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:09.780 [2024-07-15 13:54:36.228163] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:09.780 [2024-07-15 13:54:36.228196] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c05ec0 0 00:24:09.780 [2024-07-15 13:54:36.236136] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:09.780 [2024-07-15 13:54:36.236149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:09.780 [2024-07-15 13:54:36.236154] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:09.780 [2024-07-15 13:54:36.236157] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:09.780 [2024-07-15 13:54:36.236196] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.780 [2024-07-15 13:54:36.236202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.780 [2024-07-15 13:54:36.236209] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c05ec0) 00:24:09.780 [2024-07-15 13:54:36.236224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:09.780 [2024-07-15 13:54:36.236241] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c88e40, cid 0, qid 0 00:24:09.780 [2024-07-15 13:54:36.244135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.780 [2024-07-15 13:54:36.244144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.781 [2024-07-15 13:54:36.244148] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.244152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c88e40) on tqpair=0x1c05ec0 00:24:09.781 [2024-07-15 13:54:36.244165] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:09.781 [2024-07-15 13:54:36.244173] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:09.781 [2024-07-15 13:54:36.244178] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:09.781 [2024-07-15 13:54:36.244192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.244196] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.244199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c05ec0) 00:24:09.781 [2024-07-15 13:54:36.244207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.781 [2024-07-15 13:54:36.244220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c88e40, cid 0, qid 0 00:24:09.781 [2024-07-15 13:54:36.244444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.781 [2024-07-15 13:54:36.244452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.781 [2024-07-15 13:54:36.244455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.244459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c88e40) on tqpair=0x1c05ec0 00:24:09.781 [2024-07-15 13:54:36.244465] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:09.781 [2024-07-15 13:54:36.244473] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:09.781 [2024-07-15 13:54:36.244480] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.244484] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.244487] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c05ec0) 00:24:09.781 [2024-07-15 13:54:36.244494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.781 [2024-07-15 13:54:36.244505] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c88e40, cid 0, qid 0 00:24:09.781 [2024-07-15 13:54:36.244711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.781 [2024-07-15 13:54:36.244718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.781 [2024-07-15 13:54:36.244721] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.244725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c88e40) on tqpair=0x1c05ec0 00:24:09.781 [2024-07-15 13:54:36.244731] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:09.781 [2024-07-15 13:54:36.244738] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:09.781 [2024-07-15 13:54:36.244744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.244748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.244751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c05ec0) 00:24:09.781 [2024-07-15 13:54:36.244762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.781 [2024-07-15 13:54:36.244772] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c88e40, cid 0, qid 0 00:24:09.781 [2024-07-15 13:54:36.244955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.781 
[2024-07-15 13:54:36.244961] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.781 [2024-07-15 13:54:36.244965] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.244968] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c88e40) on tqpair=0x1c05ec0 00:24:09.781 [2024-07-15 13:54:36.244973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:09.781 [2024-07-15 13:54:36.244982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.244986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.244990] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c05ec0) 00:24:09.781 [2024-07-15 13:54:36.244996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.781 [2024-07-15 13:54:36.245006] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c88e40, cid 0, qid 0 00:24:09.781 [2024-07-15 13:54:36.245229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.781 [2024-07-15 13:54:36.245236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.781 [2024-07-15 13:54:36.245239] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.245243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c88e40) on tqpair=0x1c05ec0 00:24:09.781 [2024-07-15 13:54:36.245248] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:09.781 [2024-07-15 13:54:36.245252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:09.781 [2024-07-15 13:54:36.245259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:09.781 [2024-07-15 13:54:36.245365] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:09.781 [2024-07-15 13:54:36.245369] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:09.781 [2024-07-15 13:54:36.245378] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.245382] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.245385] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c05ec0) 00:24:09.781 [2024-07-15 13:54:36.245392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.781 [2024-07-15 13:54:36.245402] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c88e40, cid 0, qid 0 00:24:09.781 [2024-07-15 13:54:36.245629] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.781 [2024-07-15 13:54:36.245635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.781 [2024-07-15 13:54:36.245639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.245642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c88e40) on tqpair=0x1c05ec0 00:24:09.781 [2024-07-15 13:54:36.245647] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:09.781 [2024-07-15 13:54:36.245656] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.245662] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.245666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c05ec0) 00:24:09.781 [2024-07-15 13:54:36.245673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.781 [2024-07-15 13:54:36.245682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c88e40, cid 0, qid 0 00:24:09.781 [2024-07-15 13:54:36.245910] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.781 [2024-07-15 13:54:36.245917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.781 [2024-07-15 13:54:36.245920] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.245924] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c88e40) on tqpair=0x1c05ec0 00:24:09.781 [2024-07-15 13:54:36.245928] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:09.781 [2024-07-15 13:54:36.245933] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:09.781 [2024-07-15 13:54:36.245940] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:09.781 [2024-07-15 13:54:36.245948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:09.781 [2024-07-15 13:54:36.245957] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.245961] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c05ec0) 00:24:09.781 [2024-07-15 13:54:36.245968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.781 [2024-07-15 13:54:36.245978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c88e40, cid 0, qid 0 00:24:09.781 [2024-07-15 13:54:36.246223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.781 [2024-07-15 13:54:36.246231] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.781 [2024-07-15 13:54:36.246234] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.246238] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c05ec0): datao=0, datal=4096, cccid=0 00:24:09.781 [2024-07-15 13:54:36.246243] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c88e40) on tqpair(0x1c05ec0): expected_datao=0, payload_size=4096 00:24:09.781 [2024-07-15 13:54:36.246247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.246255] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.246259] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.246473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.781 [2024-07-15 13:54:36.246479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.781 [2024-07-15 13:54:36.246482] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.246486] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c88e40) on tqpair=0x1c05ec0 00:24:09.781 [2024-07-15 13:54:36.246494] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:09.781 [2024-07-15 13:54:36.246501] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:09.781 [2024-07-15 13:54:36.246506] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:09.781 [2024-07-15 13:54:36.246511] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:09.781 [2024-07-15 13:54:36.246515] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:09.781 [2024-07-15 13:54:36.246522] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:09.781 [2024-07-15 13:54:36.246530] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:09.781 [2024-07-15 13:54:36.246537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.246540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.246544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c05ec0) 00:24:09.781 [2024-07-15 13:54:36.246551] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:09.781 [2024-07-15 13:54:36.246562] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c88e40, cid 0, qid 0 00:24:09.781 [2024-07-15 13:54:36.246784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.781 [2024-07-15 13:54:36.246790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.781 [2024-07-15 13:54:36.246794] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.781 [2024-07-15 13:54:36.246798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c88e40) on tqpair=0x1c05ec0 00:24:09.782 [2024-07-15 13:54:36.246806] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.246809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.246813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c05ec0) 00:24:09.782 [2024-07-15 13:54:36.246819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.782 [2024-07-15 13:54:36.246825] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.246829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.246832] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c05ec0) 00:24:09.782 [2024-07-15 13:54:36.246838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.782 [2024-07-15 13:54:36.246844] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.246847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.246851] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c05ec0) 00:24:09.782 [2024-07-15 13:54:36.246857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.782 [2024-07-15 13:54:36.246862] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.246866] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.246869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c05ec0) 00:24:09.782 [2024-07-15 13:54:36.246875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.782 [2024-07-15 13:54:36.246880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:09.782 [2024-07-15 13:54:36.246890] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:09.782 [2024-07-15 13:54:36.246897] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.246900] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c05ec0) 00:24:09.782 [2024-07-15 13:54:36.246907] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.782 [2024-07-15 13:54:36.246920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c88e40, cid 0, qid 0 00:24:09.782 [2024-07-15 13:54:36.246925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c88fc0, cid 1, qid 0 00:24:09.782 [2024-07-15 13:54:36.246930] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c89140, cid 2, qid 0 00:24:09.782 [2024-07-15 13:54:36.246935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c892c0, cid 3, qid 0 00:24:09.782 [2024-07-15 13:54:36.246939] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c89440, cid 4, qid 0 00:24:09.782 [2024-07-15 13:54:36.247210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.782 [2024-07-15 13:54:36.247217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.782 [2024-07-15 13:54:36.247220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.247224] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c89440) on tqpair=0x1c05ec0 00:24:09.782 [2024-07-15 13:54:36.247230] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:09.782 [2024-07-15 13:54:36.247235] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:09.782 [2024-07-15 13:54:36.247245] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.247249] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c05ec0) 00:24:09.782 [2024-07-15 13:54:36.247255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.782 [2024-07-15 13:54:36.247265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c89440, cid 4, qid 0 00:24:09.782 [2024-07-15 13:54:36.247485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.782 [2024-07-15 13:54:36.247493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.782 [2024-07-15 13:54:36.247496] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.247500] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c05ec0): datao=0, datal=4096, cccid=4 00:24:09.782 [2024-07-15 13:54:36.247504] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c89440) on tqpair(0x1c05ec0): expected_datao=0, payload_size=4096 00:24:09.782 [2024-07-15 13:54:36.247509] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.247549] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.247553] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.292133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.782 [2024-07-15 13:54:36.292144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.782 [2024-07-15 13:54:36.292148] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.292152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c89440) on tqpair=0x1c05ec0 00:24:09.782 [2024-07-15 13:54:36.292165] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:09.782 [2024-07-15 13:54:36.292191] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.292195] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c05ec0) 00:24:09.782 [2024-07-15 13:54:36.292203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.782 [2024-07-15 13:54:36.292210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.292214] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.292217] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c05ec0) 00:24:09.782 [2024-07-15 13:54:36.292223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.782 [2024-07-15 13:54:36.292241] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1c89440, cid 4, qid 0 00:24:09.782 [2024-07-15 13:54:36.292247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c895c0, cid 5, qid 0 00:24:09.782 [2024-07-15 13:54:36.292482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.782 [2024-07-15 13:54:36.292489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.782 [2024-07-15 13:54:36.292492] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.292496] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c05ec0): datao=0, datal=1024, cccid=4 00:24:09.782 [2024-07-15 13:54:36.292500] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c89440) on tqpair(0x1c05ec0): expected_datao=0, payload_size=1024 00:24:09.782 [2024-07-15 13:54:36.292505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.292511] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.292515] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.292521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.782 [2024-07-15 13:54:36.292526] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.782 [2024-07-15 13:54:36.292529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.782 [2024-07-15 13:54:36.292533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c895c0) on tqpair=0x1c05ec0 00:24:10.046 [2024-07-15 13:54:36.333377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.046 [2024-07-15 13:54:36.333391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.046 [2024-07-15 13:54:36.333394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.046 [2024-07-15 13:54:36.333398] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c89440) on tqpair=0x1c05ec0 00:24:10.046 [2024-07-15 13:54:36.333417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.046 [2024-07-15 13:54:36.333421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c05ec0) 00:24:10.046 [2024-07-15 13:54:36.333428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.046 [2024-07-15 13:54:36.333444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c89440, cid 4, qid 0 00:24:10.046 [2024-07-15 13:54:36.333586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.046 [2024-07-15 13:54:36.333593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.046 [2024-07-15 13:54:36.333596] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.046 [2024-07-15 13:54:36.333600] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c05ec0): datao=0, datal=3072, cccid=4 00:24:10.046 [2024-07-15 13:54:36.333604] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c89440) on tqpair(0x1c05ec0): expected_datao=0, payload_size=3072 00:24:10.046 [2024-07-15 13:54:36.333608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.046 [2024-07-15 13:54:36.333615] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.046 [2024-07-15 13:54:36.333618] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.046 [2024-07-15 13:54:36.333804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.046 [2024-07-15 13:54:36.333811] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.046 [2024-07-15 13:54:36.333814] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.046 [2024-07-15 13:54:36.333818] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c89440) on tqpair=0x1c05ec0 00:24:10.046 [2024-07-15 13:54:36.333826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.046 [2024-07-15 13:54:36.333830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c05ec0) 00:24:10.046 [2024-07-15 13:54:36.333836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.046 [2024-07-15 13:54:36.333852] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c89440, cid 4, qid 0 00:24:10.046 [2024-07-15 13:54:36.334105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.046 [2024-07-15 13:54:36.334111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.046 [2024-07-15 13:54:36.334115] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.046 [2024-07-15 13:54:36.334118] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c05ec0): datao=0, datal=8, cccid=4 00:24:10.046 [2024-07-15 13:54:36.334129] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c89440) on tqpair(0x1c05ec0): expected_datao=0, payload_size=8 00:24:10.046 [2024-07-15 13:54:36.334133] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.046 [2024-07-15 13:54:36.334140] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.046 [2024-07-15 13:54:36.334143] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.046 [2024-07-15 13:54:36.374327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.046 [2024-07-15 13:54:36.374337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.046 [2024-07-15 13:54:36.374341] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.046 [2024-07-15 13:54:36.374345] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c89440) on tqpair=0x1c05ec0 00:24:10.046 ===================================================== 00:24:10.046 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:10.046 ===================================================== 00:24:10.046 Controller Capabilities/Features 00:24:10.046 ================================ 00:24:10.046 Vendor ID: 0000 00:24:10.046 Subsystem Vendor ID: 0000 00:24:10.046 Serial Number: .................... 00:24:10.046 Model Number: ........................................ 
00:24:10.046 Firmware Version: 24.09 00:24:10.046 Recommended Arb Burst: 0 00:24:10.046 IEEE OUI Identifier: 00 00 00 00:24:10.046 Multi-path I/O 00:24:10.046 May have multiple subsystem ports: No 00:24:10.046 May have multiple controllers: No 00:24:10.046 Associated with SR-IOV VF: No 00:24:10.046 Max Data Transfer Size: 131072 00:24:10.046 Max Number of Namespaces: 0 00:24:10.046 Max Number of I/O Queues: 1024 00:24:10.046 NVMe Specification Version (VS): 1.3 00:24:10.046 NVMe Specification Version (Identify): 1.3 00:24:10.046 Maximum Queue Entries: 128 00:24:10.046 Contiguous Queues Required: Yes 00:24:10.046 Arbitration Mechanisms Supported 00:24:10.046 Weighted Round Robin: Not Supported 00:24:10.046 Vendor Specific: Not Supported 00:24:10.046 Reset Timeout: 15000 ms 00:24:10.046 Doorbell Stride: 4 bytes 00:24:10.046 NVM Subsystem Reset: Not Supported 00:24:10.046 Command Sets Supported 00:24:10.046 NVM Command Set: Supported 00:24:10.046 Boot Partition: Not Supported 00:24:10.046 Memory Page Size Minimum: 4096 bytes 00:24:10.046 Memory Page Size Maximum: 4096 bytes 00:24:10.046 Persistent Memory Region: Not Supported 00:24:10.046 Optional Asynchronous Events Supported 00:24:10.046 Namespace Attribute Notices: Not Supported 00:24:10.046 Firmware Activation Notices: Not Supported 00:24:10.046 ANA Change Notices: Not Supported 00:24:10.046 PLE Aggregate Log Change Notices: Not Supported 00:24:10.046 LBA Status Info Alert Notices: Not Supported 00:24:10.046 EGE Aggregate Log Change Notices: Not Supported 00:24:10.046 Normal NVM Subsystem Shutdown event: Not Supported 00:24:10.046 Zone Descriptor Change Notices: Not Supported 00:24:10.046 Discovery Log Change Notices: Supported 00:24:10.046 Controller Attributes 00:24:10.046 128-bit Host Identifier: Not Supported 00:24:10.046 Non-Operational Permissive Mode: Not Supported 00:24:10.046 NVM Sets: Not Supported 00:24:10.046 Read Recovery Levels: Not Supported 00:24:10.046 Endurance Groups: Not Supported 00:24:10.046 Predictable Latency Mode: Not Supported 00:24:10.046 Traffic Based Keep ALive: Not Supported 00:24:10.046 Namespace Granularity: Not Supported 00:24:10.046 SQ Associations: Not Supported 00:24:10.046 UUID List: Not Supported 00:24:10.046 Multi-Domain Subsystem: Not Supported 00:24:10.046 Fixed Capacity Management: Not Supported 00:24:10.046 Variable Capacity Management: Not Supported 00:24:10.046 Delete Endurance Group: Not Supported 00:24:10.046 Delete NVM Set: Not Supported 00:24:10.046 Extended LBA Formats Supported: Not Supported 00:24:10.046 Flexible Data Placement Supported: Not Supported 00:24:10.046 00:24:10.046 Controller Memory Buffer Support 00:24:10.046 ================================ 00:24:10.046 Supported: No 00:24:10.046 00:24:10.046 Persistent Memory Region Support 00:24:10.046 ================================ 00:24:10.046 Supported: No 00:24:10.046 00:24:10.046 Admin Command Set Attributes 00:24:10.046 ============================ 00:24:10.046 Security Send/Receive: Not Supported 00:24:10.046 Format NVM: Not Supported 00:24:10.046 Firmware Activate/Download: Not Supported 00:24:10.046 Namespace Management: Not Supported 00:24:10.046 Device Self-Test: Not Supported 00:24:10.046 Directives: Not Supported 00:24:10.046 NVMe-MI: Not Supported 00:24:10.046 Virtualization Management: Not Supported 00:24:10.046 Doorbell Buffer Config: Not Supported 00:24:10.046 Get LBA Status Capability: Not Supported 00:24:10.046 Command & Feature Lockdown Capability: Not Supported 00:24:10.046 Abort Command Limit: 1 00:24:10.046 Async 
Event Request Limit: 4 00:24:10.046 Number of Firmware Slots: N/A 00:24:10.046 Firmware Slot 1 Read-Only: N/A 00:24:10.046 Firmware Activation Without Reset: N/A 00:24:10.046 Multiple Update Detection Support: N/A 00:24:10.046 Firmware Update Granularity: No Information Provided 00:24:10.046 Per-Namespace SMART Log: No 00:24:10.046 Asymmetric Namespace Access Log Page: Not Supported 00:24:10.046 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:10.046 Command Effects Log Page: Not Supported 00:24:10.046 Get Log Page Extended Data: Supported 00:24:10.046 Telemetry Log Pages: Not Supported 00:24:10.046 Persistent Event Log Pages: Not Supported 00:24:10.046 Supported Log Pages Log Page: May Support 00:24:10.046 Commands Supported & Effects Log Page: Not Supported 00:24:10.046 Feature Identifiers & Effects Log Page:May Support 00:24:10.046 NVMe-MI Commands & Effects Log Page: May Support 00:24:10.046 Data Area 4 for Telemetry Log: Not Supported 00:24:10.046 Error Log Page Entries Supported: 128 00:24:10.046 Keep Alive: Not Supported 00:24:10.046 00:24:10.046 NVM Command Set Attributes 00:24:10.046 ========================== 00:24:10.046 Submission Queue Entry Size 00:24:10.046 Max: 1 00:24:10.046 Min: 1 00:24:10.046 Completion Queue Entry Size 00:24:10.046 Max: 1 00:24:10.046 Min: 1 00:24:10.046 Number of Namespaces: 0 00:24:10.046 Compare Command: Not Supported 00:24:10.046 Write Uncorrectable Command: Not Supported 00:24:10.046 Dataset Management Command: Not Supported 00:24:10.046 Write Zeroes Command: Not Supported 00:24:10.046 Set Features Save Field: Not Supported 00:24:10.046 Reservations: Not Supported 00:24:10.046 Timestamp: Not Supported 00:24:10.046 Copy: Not Supported 00:24:10.046 Volatile Write Cache: Not Present 00:24:10.046 Atomic Write Unit (Normal): 1 00:24:10.046 Atomic Write Unit (PFail): 1 00:24:10.046 Atomic Compare & Write Unit: 1 00:24:10.046 Fused Compare & Write: Supported 00:24:10.046 Scatter-Gather List 00:24:10.046 SGL Command Set: Supported 00:24:10.046 SGL Keyed: Supported 00:24:10.047 SGL Bit Bucket Descriptor: Not Supported 00:24:10.047 SGL Metadata Pointer: Not Supported 00:24:10.047 Oversized SGL: Not Supported 00:24:10.047 SGL Metadata Address: Not Supported 00:24:10.047 SGL Offset: Supported 00:24:10.047 Transport SGL Data Block: Not Supported 00:24:10.047 Replay Protected Memory Block: Not Supported 00:24:10.047 00:24:10.047 Firmware Slot Information 00:24:10.047 ========================= 00:24:10.047 Active slot: 0 00:24:10.047 00:24:10.047 00:24:10.047 Error Log 00:24:10.047 ========= 00:24:10.047 00:24:10.047 Active Namespaces 00:24:10.047 ================= 00:24:10.047 Discovery Log Page 00:24:10.047 ================== 00:24:10.047 Generation Counter: 2 00:24:10.047 Number of Records: 2 00:24:10.047 Record Format: 0 00:24:10.047 00:24:10.047 Discovery Log Entry 0 00:24:10.047 ---------------------- 00:24:10.047 Transport Type: 3 (TCP) 00:24:10.047 Address Family: 1 (IPv4) 00:24:10.047 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:10.047 Entry Flags: 00:24:10.047 Duplicate Returned Information: 1 00:24:10.047 Explicit Persistent Connection Support for Discovery: 1 00:24:10.047 Transport Requirements: 00:24:10.047 Secure Channel: Not Required 00:24:10.047 Port ID: 0 (0x0000) 00:24:10.047 Controller ID: 65535 (0xffff) 00:24:10.047 Admin Max SQ Size: 128 00:24:10.047 Transport Service Identifier: 4420 00:24:10.047 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:10.047 Transport Address: 10.0.0.2 00:24:10.047 
Discovery Log Entry 1 00:24:10.047 ---------------------- 00:24:10.047 Transport Type: 3 (TCP) 00:24:10.047 Address Family: 1 (IPv4) 00:24:10.047 Subsystem Type: 2 (NVM Subsystem) 00:24:10.047 Entry Flags: 00:24:10.047 Duplicate Returned Information: 0 00:24:10.047 Explicit Persistent Connection Support for Discovery: 0 00:24:10.047 Transport Requirements: 00:24:10.047 Secure Channel: Not Required 00:24:10.047 Port ID: 0 (0x0000) 00:24:10.047 Controller ID: 65535 (0xffff) 00:24:10.047 Admin Max SQ Size: 128 00:24:10.047 Transport Service Identifier: 4420 00:24:10.047 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:10.047 Transport Address: 10.0.0.2 [2024-07-15 13:54:36.374433] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:10.047 [2024-07-15 13:54:36.374443] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c88e40) on tqpair=0x1c05ec0 00:24:10.047 [2024-07-15 13:54:36.374450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.047 [2024-07-15 13:54:36.374456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c88fc0) on tqpair=0x1c05ec0 00:24:10.047 [2024-07-15 13:54:36.374460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.047 [2024-07-15 13:54:36.374465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c89140) on tqpair=0x1c05ec0 00:24:10.047 [2024-07-15 13:54:36.374470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.047 [2024-07-15 13:54:36.374475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c892c0) on tqpair=0x1c05ec0 00:24:10.047 [2024-07-15 13:54:36.374479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.047 [2024-07-15 13:54:36.374489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.374493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.374496] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c05ec0) 00:24:10.047 [2024-07-15 13:54:36.374504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.047 [2024-07-15 13:54:36.374517] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c892c0, cid 3, qid 0 00:24:10.047 [2024-07-15 13:54:36.374631] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.047 [2024-07-15 13:54:36.374638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.047 [2024-07-15 13:54:36.374641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.374645] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c892c0) on tqpair=0x1c05ec0 00:24:10.047 [2024-07-15 13:54:36.374652] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.374656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.374660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c05ec0) 00:24:10.047 [2024-07-15 
13:54:36.374666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.047 [2024-07-15 13:54:36.374682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c892c0, cid 3, qid 0 00:24:10.047 [2024-07-15 13:54:36.374890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.047 [2024-07-15 13:54:36.374896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.047 [2024-07-15 13:54:36.374900] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.374904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c892c0) on tqpair=0x1c05ec0 00:24:10.047 [2024-07-15 13:54:36.374908] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:10.047 [2024-07-15 13:54:36.374913] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:10.047 [2024-07-15 13:54:36.374922] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.374926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.374929] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c05ec0) 00:24:10.047 [2024-07-15 13:54:36.374936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.047 [2024-07-15 13:54:36.374946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c892c0, cid 3, qid 0 00:24:10.047 [2024-07-15 13:54:36.375177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.047 [2024-07-15 13:54:36.375184] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.047 [2024-07-15 13:54:36.375187] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.375191] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c892c0) on tqpair=0x1c05ec0 00:24:10.047 [2024-07-15 13:54:36.375201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.375204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.375208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c05ec0) 00:24:10.047 [2024-07-15 13:54:36.375214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.047 [2024-07-15 13:54:36.375224] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c892c0, cid 3, qid 0 00:24:10.047 [2024-07-15 13:54:36.375429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.047 [2024-07-15 13:54:36.375435] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.047 [2024-07-15 13:54:36.375438] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.375442] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c892c0) on tqpair=0x1c05ec0 00:24:10.047 [2024-07-15 13:54:36.375451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.375455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.375459] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c05ec0) 00:24:10.047 [2024-07-15 13:54:36.375465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.047 [2024-07-15 13:54:36.375475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c892c0, cid 3, qid 0 00:24:10.047 [2024-07-15 13:54:36.375702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.047 [2024-07-15 13:54:36.375708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.047 [2024-07-15 13:54:36.375712] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.375715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c892c0) on tqpair=0x1c05ec0 00:24:10.047 [2024-07-15 13:54:36.375724] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.375728] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.375734] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c05ec0) 00:24:10.047 [2024-07-15 13:54:36.375740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.047 [2024-07-15 13:54:36.375750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c892c0, cid 3, qid 0 00:24:10.047 [2024-07-15 13:54:36.375975] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.047 [2024-07-15 13:54:36.375982] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.047 [2024-07-15 13:54:36.375985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.375989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c892c0) on tqpair=0x1c05ec0 00:24:10.047 [2024-07-15 13:54:36.375998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.376002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.376005] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c05ec0) 00:24:10.047 [2024-07-15 13:54:36.376012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.047 [2024-07-15 13:54:36.376022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c892c0, cid 3, qid 0 00:24:10.047 [2024-07-15 13:54:36.380131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.047 [2024-07-15 13:54:36.380139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.047 [2024-07-15 13:54:36.380143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.047 [2024-07-15 13:54:36.380147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c892c0) on tqpair=0x1c05ec0 00:24:10.047 [2024-07-15 13:54:36.380154] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:24:10.047 00:24:10.047 13:54:36 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:10.047 
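[editorial note] The host/identify.sh@45 step above runs spdk_nvme_identify with a transport-ID string ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'). For orientation only, and not the identify tool's actual source, the C sketch below shows roughly what that -r argument amounts to through SPDK's public API: parse the string into a struct spdk_nvme_transport_id, connect (which drives the admin-queue init sequence traced in the log that follows), and read the cached Identify Controller data. The program name and the choice of printed fields are assumptions made for illustration.

/* Minimal sketch (not the spdk_nvme_identify tool itself): parse the same
 * transport-ID string the test passes via -r, connect over NVMe/TCP, and
 * print a few fields of the cached Identify Controller data. */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* illustrative name, not from the harness */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* The same string the test passes via -r */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Connecting drives the admin-queue state machine traced below:
	 * icreq, FABRIC CONNECT, read vs/cap, enable, identify, ... */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("cntlid 0x%04x  mdts %u  subnqn %s\n",
	       cdata->cntlid, cdata->mdts, (const char *)cdata->subnqn);

	spdk_nvme_detach(ctrlr);
	return 0;
}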
[2024-07-15 13:54:36.425342] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:10.047 [2024-07-15 13:54:36.425390] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184882 ] 00:24:10.048 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.048 [2024-07-15 13:54:36.456672] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:10.048 [2024-07-15 13:54:36.456714] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:10.048 [2024-07-15 13:54:36.456720] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:10.048 [2024-07-15 13:54:36.456731] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:10.048 [2024-07-15 13:54:36.456736] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:10.048 [2024-07-15 13:54:36.460158] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:10.048 [2024-07-15 13:54:36.460184] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x169dec0 0 00:24:10.048 [2024-07-15 13:54:36.460446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:10.048 [2024-07-15 13:54:36.460454] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:10.048 [2024-07-15 13:54:36.460458] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:10.048 [2024-07-15 13:54:36.460464] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:10.048 [2024-07-15 13:54:36.460493] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.460499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.460503] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x169dec0) 00:24:10.048 [2024-07-15 13:54:36.460514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:10.048 [2024-07-15 13:54:36.460529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1720e40, cid 0, qid 0 00:24:10.048 [2024-07-15 13:54:36.468134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.048 [2024-07-15 13:54:36.468143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.048 [2024-07-15 13:54:36.468147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.468151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1720e40) on tqpair=0x169dec0 00:24:10.048 [2024-07-15 13:54:36.468163] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:10.048 [2024-07-15 13:54:36.468169] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:10.048 [2024-07-15 13:54:36.468174] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:10.048 [2024-07-15 13:54:36.468186] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
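[editorial note] The FABRIC PROPERTY GET/SET commands traced in this stretch are how the init state machine reads and writes the controller registers (VS, CAP, CC, CSTS) over the fabric rather than via MMIO. For orientation only, the sketch below reads the same VS and CAP values back on an already-connected controller through SPDK's register accessors; 'ctrlr' is assumed to be such a controller, and the MQES/doorbell-stride interpretation mirrors the "Maximum Queue Entries: 128" and "Doorbell Stride: 4 bytes" fields printed earlier in this log.

/* Illustrative only: read back the VS and CAP values that the
 * "read vs" / "read cap" property-get exchanges fetch during init.
 * Assumes 'ctrlr' is an already-connected struct spdk_nvme_ctrlr *. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
print_vs_and_cap(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

	/* e.g. 1.3 for the discovery controller identified earlier in this log */
	printf("NVMe version: %u.%u\n", vs.bits.mjr, vs.bits.mnr);

	/* MQES is zero-based: 127 corresponds to "Maximum Queue Entries: 128" */
	printf("Max queue entries: %u\n", cap.bits.mqes + 1);

	/* Doorbell stride is 2^(2 + DSTRD) bytes: DSTRD=0 -> 4 bytes */
	printf("Doorbell stride: %u bytes\n", 1u << (2 + cap.bits.dstrd));
}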
00:24:10.048 [2024-07-15 13:54:36.468190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.468193] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x169dec0) 00:24:10.048 [2024-07-15 13:54:36.468201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-15 13:54:36.468214] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1720e40, cid 0, qid 0 00:24:10.048 [2024-07-15 13:54:36.468423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.048 [2024-07-15 13:54:36.468430] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.048 [2024-07-15 13:54:36.468433] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.468437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1720e40) on tqpair=0x169dec0 00:24:10.048 [2024-07-15 13:54:36.468442] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:10.048 [2024-07-15 13:54:36.468450] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:10.048 [2024-07-15 13:54:36.468456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.468460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.468463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x169dec0) 00:24:10.048 [2024-07-15 13:54:36.468470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-15 13:54:36.468480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1720e40, cid 0, qid 0 00:24:10.048 [2024-07-15 13:54:36.468691] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.048 [2024-07-15 13:54:36.468697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.048 [2024-07-15 13:54:36.468701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.468705] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1720e40) on tqpair=0x169dec0 00:24:10.048 [2024-07-15 13:54:36.468709] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:10.048 [2024-07-15 13:54:36.468717] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:10.048 [2024-07-15 13:54:36.468726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.468730] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.468733] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x169dec0) 00:24:10.048 [2024-07-15 13:54:36.468740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-15 13:54:36.468750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1720e40, cid 0, qid 0 00:24:10.048 [2024-07-15 13:54:36.468964] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:24:10.048 [2024-07-15 13:54:36.468970] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.048 [2024-07-15 13:54:36.468974] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.468977] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1720e40) on tqpair=0x169dec0 00:24:10.048 [2024-07-15 13:54:36.468982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:10.048 [2024-07-15 13:54:36.468992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.468996] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.468999] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x169dec0) 00:24:10.048 [2024-07-15 13:54:36.469006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-15 13:54:36.469016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1720e40, cid 0, qid 0 00:24:10.048 [2024-07-15 13:54:36.469228] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.048 [2024-07-15 13:54:36.469235] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.048 [2024-07-15 13:54:36.469238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.469242] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1720e40) on tqpair=0x169dec0 00:24:10.048 [2024-07-15 13:54:36.469246] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:10.048 [2024-07-15 13:54:36.469251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:10.048 [2024-07-15 13:54:36.469259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:10.048 [2024-07-15 13:54:36.469364] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:10.048 [2024-07-15 13:54:36.469368] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:10.048 [2024-07-15 13:54:36.469376] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.469380] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.469383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x169dec0) 00:24:10.048 [2024-07-15 13:54:36.469390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-15 13:54:36.469401] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1720e40, cid 0, qid 0 00:24:10.048 [2024-07-15 13:54:36.469606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.048 [2024-07-15 13:54:36.469613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.048 [2024-07-15 13:54:36.469616] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.048 [2024-07-15 
13:54:36.469620] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1720e40) on tqpair=0x169dec0 00:24:10.048 [2024-07-15 13:54:36.469624] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:10.048 [2024-07-15 13:54:36.469636] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.469640] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.469643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x169dec0) 00:24:10.048 [2024-07-15 13:54:36.469650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-15 13:54:36.469660] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1720e40, cid 0, qid 0 00:24:10.048 [2024-07-15 13:54:36.469883] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.048 [2024-07-15 13:54:36.469889] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.048 [2024-07-15 13:54:36.469893] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.469897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1720e40) on tqpair=0x169dec0 00:24:10.048 [2024-07-15 13:54:36.469901] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:10.048 [2024-07-15 13:54:36.469906] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:10.048 [2024-07-15 13:54:36.469913] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:10.048 [2024-07-15 13:54:36.469921] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:10.048 [2024-07-15 13:54:36.469929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.469933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x169dec0) 00:24:10.048 [2024-07-15 13:54:36.469940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.048 [2024-07-15 13:54:36.469950] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1720e40, cid 0, qid 0 00:24:10.048 [2024-07-15 13:54:36.470198] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.048 [2024-07-15 13:54:36.470205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.048 [2024-07-15 13:54:36.470209] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.470212] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x169dec0): datao=0, datal=4096, cccid=0 00:24:10.048 [2024-07-15 13:54:36.470217] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1720e40) on tqpair(0x169dec0): expected_datao=0, payload_size=4096 00:24:10.048 [2024-07-15 13:54:36.470221] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.470266] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.470270] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.470462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.048 [2024-07-15 13:54:36.470468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.048 [2024-07-15 13:54:36.470471] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.048 [2024-07-15 13:54:36.470475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1720e40) on tqpair=0x169dec0 00:24:10.049 [2024-07-15 13:54:36.470482] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:10.049 [2024-07-15 13:54:36.470489] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:10.049 [2024-07-15 13:54:36.470494] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:10.049 [2024-07-15 13:54:36.470498] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:10.049 [2024-07-15 13:54:36.470504] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:10.049 [2024-07-15 13:54:36.470509] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:10.049 [2024-07-15 13:54:36.470518] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:10.049 [2024-07-15 13:54:36.470524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.470528] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.470531] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x169dec0) 00:24:10.049 [2024-07-15 13:54:36.470538] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:10.049 [2024-07-15 13:54:36.470549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1720e40, cid 0, qid 0 00:24:10.049 [2024-07-15 13:54:36.470748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.049 [2024-07-15 13:54:36.470755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.049 [2024-07-15 13:54:36.470758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.470762] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1720e40) on tqpair=0x169dec0 00:24:10.049 [2024-07-15 13:54:36.470769] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.470772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.470776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x169dec0) 00:24:10.049 [2024-07-15 13:54:36.470782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.049 [2024-07-15 13:54:36.470788] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.470791] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.470795] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x169dec0) 00:24:10.049 [2024-07-15 13:54:36.470800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.049 [2024-07-15 13:54:36.470806] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.470810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.470813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x169dec0) 00:24:10.049 [2024-07-15 13:54:36.470819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.049 [2024-07-15 13:54:36.470825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.470828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.470832] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x169dec0) 00:24:10.049 [2024-07-15 13:54:36.470837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.049 [2024-07-15 13:54:36.470842] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:10.049 [2024-07-15 13:54:36.470852] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:10.049 [2024-07-15 13:54:36.470858] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.470862] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x169dec0) 00:24:10.049 [2024-07-15 13:54:36.470869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.049 [2024-07-15 13:54:36.470882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1720e40, cid 0, qid 0 00:24:10.049 [2024-07-15 13:54:36.470887] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1720fc0, cid 1, qid 0 00:24:10.049 [2024-07-15 13:54:36.470892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1721140, cid 2, qid 0 00:24:10.049 [2024-07-15 13:54:36.470897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17212c0, cid 3, qid 0 00:24:10.049 [2024-07-15 13:54:36.470901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1721440, cid 4, qid 0 00:24:10.049 [2024-07-15 13:54:36.471103] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.049 [2024-07-15 13:54:36.471109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.049 [2024-07-15 13:54:36.471113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.471117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1721440) on tqpair=0x169dec0 00:24:10.049 [2024-07-15 13:54:36.471127] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:10.049 [2024-07-15 
13:54:36.471132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:10.049 [2024-07-15 13:54:36.471139] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:10.049 [2024-07-15 13:54:36.471146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:10.049 [2024-07-15 13:54:36.471152] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.471156] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.471159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x169dec0) 00:24:10.049 [2024-07-15 13:54:36.471166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:10.049 [2024-07-15 13:54:36.471176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1721440, cid 4, qid 0 00:24:10.049 [2024-07-15 13:54:36.471382] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.049 [2024-07-15 13:54:36.471389] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.049 [2024-07-15 13:54:36.471392] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.471396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1721440) on tqpair=0x169dec0 00:24:10.049 [2024-07-15 13:54:36.471458] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:10.049 [2024-07-15 13:54:36.471467] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:10.049 [2024-07-15 13:54:36.471474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.471478] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x169dec0) 00:24:10.049 [2024-07-15 13:54:36.471484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.049 [2024-07-15 13:54:36.471494] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1721440, cid 4, qid 0 00:24:10.049 [2024-07-15 13:54:36.471730] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.049 [2024-07-15 13:54:36.471737] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.049 [2024-07-15 13:54:36.471740] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.471744] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x169dec0): datao=0, datal=4096, cccid=4 00:24:10.049 [2024-07-15 13:54:36.471748] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1721440) on tqpair(0x169dec0): expected_datao=0, payload_size=4096 00:24:10.049 [2024-07-15 13:54:36.471755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.471762] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.471766] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.516133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.049 [2024-07-15 13:54:36.516144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.049 [2024-07-15 13:54:36.516147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.516151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1721440) on tqpair=0x169dec0 00:24:10.049 [2024-07-15 13:54:36.516162] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:10.049 [2024-07-15 13:54:36.516178] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:10.049 [2024-07-15 13:54:36.516188] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:10.049 [2024-07-15 13:54:36.516194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.516198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x169dec0) 00:24:10.049 [2024-07-15 13:54:36.516205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.049 [2024-07-15 13:54:36.516217] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1721440, cid 4, qid 0 00:24:10.049 [2024-07-15 13:54:36.516439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.049 [2024-07-15 13:54:36.516446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.049 [2024-07-15 13:54:36.516449] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.516453] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x169dec0): datao=0, datal=4096, cccid=4 00:24:10.049 [2024-07-15 13:54:36.516457] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1721440) on tqpair(0x169dec0): expected_datao=0, payload_size=4096 00:24:10.049 [2024-07-15 13:54:36.516461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.516579] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.516582] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.557301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.049 [2024-07-15 13:54:36.557312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.049 [2024-07-15 13:54:36.557316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.557320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1721440) on tqpair=0x169dec0 00:24:10.049 [2024-07-15 13:54:36.557334] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:10.049 [2024-07-15 13:54:36.557344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:10.049 [2024-07-15 13:54:36.557352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.049 [2024-07-15 13:54:36.557356] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x169dec0) 00:24:10.049 [2024-07-15 13:54:36.557364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.049 [2024-07-15 13:54:36.557376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1721440, cid 4, qid 0 00:24:10.049 [2024-07-15 13:54:36.557538] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.049 [2024-07-15 13:54:36.557545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.049 [2024-07-15 13:54:36.557553] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.557557] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x169dec0): datao=0, datal=4096, cccid=4 00:24:10.050 [2024-07-15 13:54:36.557561] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1721440) on tqpair(0x169dec0): expected_datao=0, payload_size=4096 00:24:10.050 [2024-07-15 13:54:36.557565] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.557686] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.557690] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.557884] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.050 [2024-07-15 13:54:36.557891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.050 [2024-07-15 13:54:36.557894] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.557898] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1721440) on tqpair=0x169dec0 00:24:10.050 [2024-07-15 13:54:36.557906] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:10.050 [2024-07-15 13:54:36.557915] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:10.050 [2024-07-15 13:54:36.557923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:10.050 [2024-07-15 13:54:36.557930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:10.050 [2024-07-15 13:54:36.557935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:10.050 [2024-07-15 13:54:36.557940] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:10.050 [2024-07-15 13:54:36.557945] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:10.050 [2024-07-15 13:54:36.557950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:10.050 [2024-07-15 13:54:36.557956] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:10.050 [2024-07-15 13:54:36.557971] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:10.050 [2024-07-15 13:54:36.557975] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x169dec0) 00:24:10.050 [2024-07-15 13:54:36.557981] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.050 [2024-07-15 13:54:36.557989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.557992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.557996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x169dec0) 00:24:10.050 [2024-07-15 13:54:36.558002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.050 [2024-07-15 13:54:36.558016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1721440, cid 4, qid 0 00:24:10.050 [2024-07-15 13:54:36.558022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17215c0, cid 5, qid 0 00:24:10.050 [2024-07-15 13:54:36.558211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.050 [2024-07-15 13:54:36.558218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.050 [2024-07-15 13:54:36.558222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.558226] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1721440) on tqpair=0x169dec0 00:24:10.050 [2024-07-15 13:54:36.558235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.050 [2024-07-15 13:54:36.558241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.050 [2024-07-15 13:54:36.558245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.558249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17215c0) on tqpair=0x169dec0 00:24:10.050 [2024-07-15 13:54:36.558258] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.558262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x169dec0) 00:24:10.050 [2024-07-15 13:54:36.558268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.050 [2024-07-15 13:54:36.558279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17215c0, cid 5, qid 0 00:24:10.050 [2024-07-15 13:54:36.558519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.050 [2024-07-15 13:54:36.558525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.050 [2024-07-15 13:54:36.558529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.558533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17215c0) on tqpair=0x169dec0 00:24:10.050 [2024-07-15 13:54:36.558542] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.558546] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x169dec0) 00:24:10.050 [2024-07-15 13:54:36.558552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.050 [2024-07-15 13:54:36.558562] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17215c0, cid 5, qid 0 00:24:10.050 [2024-07-15 13:54:36.558770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.050 [2024-07-15 13:54:36.558776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.050 [2024-07-15 13:54:36.558779] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.558783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17215c0) on tqpair=0x169dec0 00:24:10.050 [2024-07-15 13:54:36.558792] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.558796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x169dec0) 00:24:10.050 [2024-07-15 13:54:36.558802] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.050 [2024-07-15 13:54:36.558812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17215c0, cid 5, qid 0 00:24:10.050 [2024-07-15 13:54:36.559023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.050 [2024-07-15 13:54:36.559030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.050 [2024-07-15 13:54:36.559033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.559037] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17215c0) on tqpair=0x169dec0 00:24:10.050 [2024-07-15 13:54:36.559051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.559056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x169dec0) 00:24:10.050 [2024-07-15 13:54:36.559062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.050 [2024-07-15 13:54:36.559070] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.559074] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x169dec0) 00:24:10.050 [2024-07-15 13:54:36.559080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.050 [2024-07-15 13:54:36.559087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.559093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x169dec0) 00:24:10.050 [2024-07-15 13:54:36.559099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.050 [2024-07-15 13:54:36.559107] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.559111] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x169dec0) 00:24:10.050 [2024-07-15 13:54:36.559117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.050 [2024-07-15 13:54:36.563135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17215c0, cid 5, qid 0 00:24:10.050 
[2024-07-15 13:54:36.563143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1721440, cid 4, qid 0 00:24:10.050 [2024-07-15 13:54:36.563147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1721740, cid 6, qid 0 00:24:10.050 [2024-07-15 13:54:36.563152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17218c0, cid 7, qid 0 00:24:10.050 [2024-07-15 13:54:36.563390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.050 [2024-07-15 13:54:36.563397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.050 [2024-07-15 13:54:36.563400] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.563404] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x169dec0): datao=0, datal=8192, cccid=5 00:24:10.050 [2024-07-15 13:54:36.563408] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17215c0) on tqpair(0x169dec0): expected_datao=0, payload_size=8192 00:24:10.050 [2024-07-15 13:54:36.563413] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.563509] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.563513] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.563519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.050 [2024-07-15 13:54:36.563525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.050 [2024-07-15 13:54:36.563528] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.050 [2024-07-15 13:54:36.563532] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x169dec0): datao=0, datal=512, cccid=4 00:24:10.050 [2024-07-15 13:54:36.563536] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1721440) on tqpair(0x169dec0): expected_datao=0, payload_size=512 00:24:10.050 [2024-07-15 13:54:36.563540] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.051 [2024-07-15 13:54:36.563546] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.051 [2024-07-15 13:54:36.563550] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.051 [2024-07-15 13:54:36.563555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.051 [2024-07-15 13:54:36.563561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.051 [2024-07-15 13:54:36.563564] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.051 [2024-07-15 13:54:36.563568] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x169dec0): datao=0, datal=512, cccid=6 00:24:10.051 [2024-07-15 13:54:36.563572] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1721740) on tqpair(0x169dec0): expected_datao=0, payload_size=512 00:24:10.051 [2024-07-15 13:54:36.563576] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.051 [2024-07-15 13:54:36.563582] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.051 [2024-07-15 13:54:36.563586] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.051 [2024-07-15 13:54:36.563591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.051 [2024-07-15 13:54:36.563597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.051 [2024-07-15 13:54:36.563600] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.051 [2024-07-15 13:54:36.563606] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x169dec0): datao=0, datal=4096, cccid=7 00:24:10.051 [2024-07-15 13:54:36.563610] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17218c0) on tqpair(0x169dec0): expected_datao=0, payload_size=4096 00:24:10.051 [2024-07-15 13:54:36.563615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.051 [2024-07-15 13:54:36.563639] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.051 [2024-07-15 13:54:36.563643] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.312 [2024-07-15 13:54:36.604380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.312 [2024-07-15 13:54:36.604391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.312 [2024-07-15 13:54:36.604395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.312 [2024-07-15 13:54:36.604399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17215c0) on tqpair=0x169dec0 00:24:10.312 [2024-07-15 13:54:36.604413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.312 [2024-07-15 13:54:36.604419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.312 [2024-07-15 13:54:36.604423] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.312 [2024-07-15 13:54:36.604426] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1721440) on tqpair=0x169dec0 00:24:10.312 [2024-07-15 13:54:36.604436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.312 [2024-07-15 13:54:36.604442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.312 [2024-07-15 13:54:36.604445] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.312 [2024-07-15 13:54:36.604449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1721740) on tqpair=0x169dec0 00:24:10.312 [2024-07-15 13:54:36.604456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.312 [2024-07-15 13:54:36.604462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.312 [2024-07-15 13:54:36.604465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.312 [2024-07-15 13:54:36.604469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17218c0) on tqpair=0x169dec0 00:24:10.312 ===================================================== 00:24:10.312 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:10.312 ===================================================== 00:24:10.312 Controller Capabilities/Features 00:24:10.312 ================================ 00:24:10.312 Vendor ID: 8086 00:24:10.312 Subsystem Vendor ID: 8086 00:24:10.312 Serial Number: SPDK00000000000001 00:24:10.312 Model Number: SPDK bdev Controller 00:24:10.312 Firmware Version: 24.09 00:24:10.312 Recommended Arb Burst: 6 00:24:10.312 IEEE OUI Identifier: e4 d2 5c 00:24:10.312 Multi-path I/O 00:24:10.312 May have multiple subsystem ports: Yes 00:24:10.312 May have multiple controllers: Yes 00:24:10.312 Associated with SR-IOV VF: No 00:24:10.312 Max Data Transfer Size: 131072 00:24:10.312 Max Number of Namespaces: 32 00:24:10.312 Max Number of I/O Queues: 127 00:24:10.312 NVMe Specification Version (VS): 1.3 00:24:10.312 NVMe Specification Version (Identify): 1.3 
00:24:10.312 Maximum Queue Entries: 128 00:24:10.312 Contiguous Queues Required: Yes 00:24:10.312 Arbitration Mechanisms Supported 00:24:10.312 Weighted Round Robin: Not Supported 00:24:10.312 Vendor Specific: Not Supported 00:24:10.312 Reset Timeout: 15000 ms 00:24:10.312 Doorbell Stride: 4 bytes 00:24:10.312 NVM Subsystem Reset: Not Supported 00:24:10.312 Command Sets Supported 00:24:10.312 NVM Command Set: Supported 00:24:10.312 Boot Partition: Not Supported 00:24:10.312 Memory Page Size Minimum: 4096 bytes 00:24:10.312 Memory Page Size Maximum: 4096 bytes 00:24:10.312 Persistent Memory Region: Not Supported 00:24:10.312 Optional Asynchronous Events Supported 00:24:10.312 Namespace Attribute Notices: Supported 00:24:10.312 Firmware Activation Notices: Not Supported 00:24:10.312 ANA Change Notices: Not Supported 00:24:10.312 PLE Aggregate Log Change Notices: Not Supported 00:24:10.312 LBA Status Info Alert Notices: Not Supported 00:24:10.312 EGE Aggregate Log Change Notices: Not Supported 00:24:10.312 Normal NVM Subsystem Shutdown event: Not Supported 00:24:10.312 Zone Descriptor Change Notices: Not Supported 00:24:10.312 Discovery Log Change Notices: Not Supported 00:24:10.312 Controller Attributes 00:24:10.312 128-bit Host Identifier: Supported 00:24:10.312 Non-Operational Permissive Mode: Not Supported 00:24:10.312 NVM Sets: Not Supported 00:24:10.312 Read Recovery Levels: Not Supported 00:24:10.312 Endurance Groups: Not Supported 00:24:10.312 Predictable Latency Mode: Not Supported 00:24:10.312 Traffic Based Keep ALive: Not Supported 00:24:10.312 Namespace Granularity: Not Supported 00:24:10.312 SQ Associations: Not Supported 00:24:10.312 UUID List: Not Supported 00:24:10.312 Multi-Domain Subsystem: Not Supported 00:24:10.312 Fixed Capacity Management: Not Supported 00:24:10.312 Variable Capacity Management: Not Supported 00:24:10.312 Delete Endurance Group: Not Supported 00:24:10.312 Delete NVM Set: Not Supported 00:24:10.312 Extended LBA Formats Supported: Not Supported 00:24:10.312 Flexible Data Placement Supported: Not Supported 00:24:10.312 00:24:10.312 Controller Memory Buffer Support 00:24:10.312 ================================ 00:24:10.312 Supported: No 00:24:10.312 00:24:10.312 Persistent Memory Region Support 00:24:10.312 ================================ 00:24:10.312 Supported: No 00:24:10.312 00:24:10.312 Admin Command Set Attributes 00:24:10.312 ============================ 00:24:10.312 Security Send/Receive: Not Supported 00:24:10.312 Format NVM: Not Supported 00:24:10.312 Firmware Activate/Download: Not Supported 00:24:10.312 Namespace Management: Not Supported 00:24:10.312 Device Self-Test: Not Supported 00:24:10.312 Directives: Not Supported 00:24:10.312 NVMe-MI: Not Supported 00:24:10.312 Virtualization Management: Not Supported 00:24:10.312 Doorbell Buffer Config: Not Supported 00:24:10.312 Get LBA Status Capability: Not Supported 00:24:10.312 Command & Feature Lockdown Capability: Not Supported 00:24:10.312 Abort Command Limit: 4 00:24:10.312 Async Event Request Limit: 4 00:24:10.312 Number of Firmware Slots: N/A 00:24:10.312 Firmware Slot 1 Read-Only: N/A 00:24:10.312 Firmware Activation Without Reset: N/A 00:24:10.312 Multiple Update Detection Support: N/A 00:24:10.312 Firmware Update Granularity: No Information Provided 00:24:10.312 Per-Namespace SMART Log: No 00:24:10.312 Asymmetric Namespace Access Log Page: Not Supported 00:24:10.312 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:10.312 Command Effects Log Page: Supported 00:24:10.312 Get Log Page Extended 
Data: Supported 00:24:10.312 Telemetry Log Pages: Not Supported 00:24:10.312 Persistent Event Log Pages: Not Supported 00:24:10.312 Supported Log Pages Log Page: May Support 00:24:10.312 Commands Supported & Effects Log Page: Not Supported 00:24:10.312 Feature Identifiers & Effects Log Page:May Support 00:24:10.312 NVMe-MI Commands & Effects Log Page: May Support 00:24:10.312 Data Area 4 for Telemetry Log: Not Supported 00:24:10.312 Error Log Page Entries Supported: 128 00:24:10.312 Keep Alive: Supported 00:24:10.312 Keep Alive Granularity: 10000 ms 00:24:10.312 00:24:10.312 NVM Command Set Attributes 00:24:10.312 ========================== 00:24:10.312 Submission Queue Entry Size 00:24:10.312 Max: 64 00:24:10.312 Min: 64 00:24:10.312 Completion Queue Entry Size 00:24:10.312 Max: 16 00:24:10.312 Min: 16 00:24:10.312 Number of Namespaces: 32 00:24:10.312 Compare Command: Supported 00:24:10.313 Write Uncorrectable Command: Not Supported 00:24:10.313 Dataset Management Command: Supported 00:24:10.313 Write Zeroes Command: Supported 00:24:10.313 Set Features Save Field: Not Supported 00:24:10.313 Reservations: Supported 00:24:10.313 Timestamp: Not Supported 00:24:10.313 Copy: Supported 00:24:10.313 Volatile Write Cache: Present 00:24:10.313 Atomic Write Unit (Normal): 1 00:24:10.313 Atomic Write Unit (PFail): 1 00:24:10.313 Atomic Compare & Write Unit: 1 00:24:10.313 Fused Compare & Write: Supported 00:24:10.313 Scatter-Gather List 00:24:10.313 SGL Command Set: Supported 00:24:10.313 SGL Keyed: Supported 00:24:10.313 SGL Bit Bucket Descriptor: Not Supported 00:24:10.313 SGL Metadata Pointer: Not Supported 00:24:10.313 Oversized SGL: Not Supported 00:24:10.313 SGL Metadata Address: Not Supported 00:24:10.313 SGL Offset: Supported 00:24:10.313 Transport SGL Data Block: Not Supported 00:24:10.313 Replay Protected Memory Block: Not Supported 00:24:10.313 00:24:10.313 Firmware Slot Information 00:24:10.313 ========================= 00:24:10.313 Active slot: 1 00:24:10.313 Slot 1 Firmware Revision: 24.09 00:24:10.313 00:24:10.313 00:24:10.313 Commands Supported and Effects 00:24:10.313 ============================== 00:24:10.313 Admin Commands 00:24:10.313 -------------- 00:24:10.313 Get Log Page (02h): Supported 00:24:10.313 Identify (06h): Supported 00:24:10.313 Abort (08h): Supported 00:24:10.313 Set Features (09h): Supported 00:24:10.313 Get Features (0Ah): Supported 00:24:10.313 Asynchronous Event Request (0Ch): Supported 00:24:10.313 Keep Alive (18h): Supported 00:24:10.313 I/O Commands 00:24:10.313 ------------ 00:24:10.313 Flush (00h): Supported LBA-Change 00:24:10.313 Write (01h): Supported LBA-Change 00:24:10.313 Read (02h): Supported 00:24:10.313 Compare (05h): Supported 00:24:10.313 Write Zeroes (08h): Supported LBA-Change 00:24:10.313 Dataset Management (09h): Supported LBA-Change 00:24:10.313 Copy (19h): Supported LBA-Change 00:24:10.313 00:24:10.313 Error Log 00:24:10.313 ========= 00:24:10.313 00:24:10.313 Arbitration 00:24:10.313 =========== 00:24:10.313 Arbitration Burst: 1 00:24:10.313 00:24:10.313 Power Management 00:24:10.313 ================ 00:24:10.313 Number of Power States: 1 00:24:10.313 Current Power State: Power State #0 00:24:10.313 Power State #0: 00:24:10.313 Max Power: 0.00 W 00:24:10.313 Non-Operational State: Operational 00:24:10.313 Entry Latency: Not Reported 00:24:10.313 Exit Latency: Not Reported 00:24:10.313 Relative Read Throughput: 0 00:24:10.313 Relative Read Latency: 0 00:24:10.313 Relative Write Throughput: 0 00:24:10.313 Relative Write Latency: 0 
00:24:10.313 Idle Power: Not Reported 00:24:10.313 Active Power: Not Reported 00:24:10.313 Non-Operational Permissive Mode: Not Supported 00:24:10.313 00:24:10.313 Health Information 00:24:10.313 ================== 00:24:10.313 Critical Warnings: 00:24:10.313 Available Spare Space: OK 00:24:10.313 Temperature: OK 00:24:10.313 Device Reliability: OK 00:24:10.313 Read Only: No 00:24:10.313 Volatile Memory Backup: OK 00:24:10.313 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:10.313 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:10.313 Available Spare: 0% 00:24:10.313 Available Spare Threshold: 0% 00:24:10.313 Life Percentage Used:[2024-07-15 13:54:36.604572] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.604578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x169dec0) 00:24:10.313 [2024-07-15 13:54:36.604586] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.313 [2024-07-15 13:54:36.604599] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17218c0, cid 7, qid 0 00:24:10.313 [2024-07-15 13:54:36.604848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.313 [2024-07-15 13:54:36.604854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.313 [2024-07-15 13:54:36.604858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.604861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17218c0) on tqpair=0x169dec0 00:24:10.313 [2024-07-15 13:54:36.604894] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:10.313 [2024-07-15 13:54:36.604904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1720e40) on tqpair=0x169dec0 00:24:10.313 [2024-07-15 13:54:36.604910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.313 [2024-07-15 13:54:36.604916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1720fc0) on tqpair=0x169dec0 00:24:10.313 [2024-07-15 13:54:36.604920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.313 [2024-07-15 13:54:36.604925] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1721140) on tqpair=0x169dec0 00:24:10.313 [2024-07-15 13:54:36.604930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.313 [2024-07-15 13:54:36.604936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17212c0) on tqpair=0x169dec0 00:24:10.313 [2024-07-15 13:54:36.604941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.313 [2024-07-15 13:54:36.604949] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.604953] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.604956] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x169dec0) 00:24:10.313 [2024-07-15 13:54:36.604963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:10.313 [2024-07-15 13:54:36.604975] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17212c0, cid 3, qid 0 00:24:10.313 [2024-07-15 13:54:36.605177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.313 [2024-07-15 13:54:36.605184] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.313 [2024-07-15 13:54:36.605188] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.605191] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17212c0) on tqpair=0x169dec0 00:24:10.313 [2024-07-15 13:54:36.605198] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.605202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.605205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x169dec0) 00:24:10.313 [2024-07-15 13:54:36.605212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.313 [2024-07-15 13:54:36.605225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17212c0, cid 3, qid 0 00:24:10.313 [2024-07-15 13:54:36.605320] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.313 [2024-07-15 13:54:36.605326] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.313 [2024-07-15 13:54:36.605329] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.605333] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17212c0) on tqpair=0x169dec0 00:24:10.313 [2024-07-15 13:54:36.605337] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:10.313 [2024-07-15 13:54:36.605342] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:10.313 [2024-07-15 13:54:36.605351] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.605356] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.605360] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x169dec0) 00:24:10.313 [2024-07-15 13:54:36.605366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.313 [2024-07-15 13:54:36.605376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17212c0, cid 3, qid 0 00:24:10.313 [2024-07-15 13:54:36.605622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.313 [2024-07-15 13:54:36.605628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.313 [2024-07-15 13:54:36.605632] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.605636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17212c0) on tqpair=0x169dec0 00:24:10.313 [2024-07-15 13:54:36.605645] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.605649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.605653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x169dec0) 00:24:10.313 [2024-07-15 13:54:36.605659] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.313 [2024-07-15 13:54:36.605671] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17212c0, cid 3, qid 0 00:24:10.313 [2024-07-15 13:54:36.605873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.313 [2024-07-15 13:54:36.605880] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.313 [2024-07-15 13:54:36.605883] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.605887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17212c0) on tqpair=0x169dec0 00:24:10.313 [2024-07-15 13:54:36.605897] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.605901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.605904] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x169dec0) 00:24:10.313 [2024-07-15 13:54:36.605911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.313 [2024-07-15 13:54:36.605920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17212c0, cid 3, qid 0 00:24:10.313 [2024-07-15 13:54:36.606097] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.313 [2024-07-15 13:54:36.606104] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.313 [2024-07-15 13:54:36.606107] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.606111] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17212c0) on tqpair=0x169dec0 00:24:10.313 [2024-07-15 13:54:36.606120] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.313 [2024-07-15 13:54:36.606130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.314 [2024-07-15 13:54:36.606133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x169dec0) 00:24:10.314 [2024-07-15 13:54:36.606140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.314 [2024-07-15 13:54:36.606150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17212c0, cid 3, qid 0 00:24:10.314 [2024-07-15 13:54:36.606377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.314 [2024-07-15 13:54:36.606383] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.314 [2024-07-15 13:54:36.606386] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.314 [2024-07-15 13:54:36.606390] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17212c0) on tqpair=0x169dec0 00:24:10.314 [2024-07-15 13:54:36.606400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.314 [2024-07-15 13:54:36.606403] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.314 [2024-07-15 13:54:36.606407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x169dec0) 00:24:10.314 [2024-07-15 13:54:36.606413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.314 [2024-07-15 13:54:36.606423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x17212c0, cid 3, qid 0 00:24:10.314 [2024-07-15 13:54:36.606630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.314 [2024-07-15 13:54:36.606637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.314 [2024-07-15 13:54:36.606640] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.314 [2024-07-15 13:54:36.606644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17212c0) on tqpair=0x169dec0 00:24:10.314 [2024-07-15 13:54:36.606653] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.314 [2024-07-15 13:54:36.606657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.314 [2024-07-15 13:54:36.606660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x169dec0) 00:24:10.314 [2024-07-15 13:54:36.606667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.314 [2024-07-15 13:54:36.606676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17212c0, cid 3, qid 0 00:24:10.314 [2024-07-15 13:54:36.606881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.314 [2024-07-15 13:54:36.606887] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.314 [2024-07-15 13:54:36.606891] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.314 [2024-07-15 13:54:36.606894] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17212c0) on tqpair=0x169dec0 00:24:10.314 [2024-07-15 13:54:36.606904] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.314 [2024-07-15 13:54:36.606908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.314 [2024-07-15 13:54:36.606911] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x169dec0) 00:24:10.314 [2024-07-15 13:54:36.606918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.314 [2024-07-15 13:54:36.606927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17212c0, cid 3, qid 0 00:24:10.314 [2024-07-15 13:54:36.611129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.314 [2024-07-15 13:54:36.611138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.314 [2024-07-15 13:54:36.611142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.314 [2024-07-15 13:54:36.611146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17212c0) on tqpair=0x169dec0 00:24:10.314 [2024-07-15 13:54:36.611156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.314 [2024-07-15 13:54:36.611160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.314 [2024-07-15 13:54:36.611164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x169dec0) 00:24:10.314 [2024-07-15 13:54:36.611170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.314 [2024-07-15 13:54:36.611182] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17212c0, cid 3, qid 0 00:24:10.314 [2024-07-15 13:54:36.611392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.314 [2024-07-15 13:54:36.611398] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.314 [2024-07-15 13:54:36.611402] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.314 [2024-07-15 13:54:36.611405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17212c0) on tqpair=0x169dec0 00:24:10.314 [2024-07-15 13:54:36.611413] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:10.314 0% 00:24:10.314 Data Units Read: 0 00:24:10.314 Data Units Written: 0 00:24:10.314 Host Read Commands: 0 00:24:10.314 Host Write Commands: 0 00:24:10.314 Controller Busy Time: 0 minutes 00:24:10.314 Power Cycles: 0 00:24:10.314 Power On Hours: 0 hours 00:24:10.314 Unsafe Shutdowns: 0 00:24:10.314 Unrecoverable Media Errors: 0 00:24:10.314 Lifetime Error Log Entries: 0 00:24:10.314 Warning Temperature Time: 0 minutes 00:24:10.314 Critical Temperature Time: 0 minutes 00:24:10.314 00:24:10.314 Number of Queues 00:24:10.314 ================ 00:24:10.314 Number of I/O Submission Queues: 127 00:24:10.314 Number of I/O Completion Queues: 127 00:24:10.314 00:24:10.314 Active Namespaces 00:24:10.314 ================= 00:24:10.314 Namespace ID:1 00:24:10.314 Error Recovery Timeout: Unlimited 00:24:10.314 Command Set Identifier: NVM (00h) 00:24:10.314 Deallocate: Supported 00:24:10.314 Deallocated/Unwritten Error: Not Supported 00:24:10.314 Deallocated Read Value: Unknown 00:24:10.314 Deallocate in Write Zeroes: Not Supported 00:24:10.314 Deallocated Guard Field: 0xFFFF 00:24:10.314 Flush: Supported 00:24:10.314 Reservation: Supported 00:24:10.314 Namespace Sharing Capabilities: Multiple Controllers 00:24:10.314 Size (in LBAs): 131072 (0GiB) 00:24:10.314 Capacity (in LBAs): 131072 (0GiB) 00:24:10.314 Utilization (in LBAs): 131072 (0GiB) 00:24:10.314 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:10.314 EUI64: ABCDEF0123456789 00:24:10.314 UUID: b4449826-0e97-494b-a637-3cd8999a1596 00:24:10.314 Thin Provisioning: Not Supported 00:24:10.314 Per-NS Atomic Units: Yes 00:24:10.314 Atomic Boundary Size (Normal): 0 00:24:10.314 Atomic Boundary Size (PFail): 0 00:24:10.314 Atomic Boundary Offset: 0 00:24:10.314 Maximum Single Source Range Length: 65535 00:24:10.314 Maximum Copy Length: 65535 00:24:10.314 Maximum Source Range Count: 1 00:24:10.314 NGUID/EUI64 Never Reused: No 00:24:10.314 Namespace Write Protected: No 00:24:10.314 Number of LBA Formats: 1 00:24:10.314 Current LBA Format: LBA Format #00 00:24:10.314 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:10.314 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:10.314 13:54:36 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:10.314 rmmod nvme_tcp 00:24:10.314 rmmod nvme_fabrics 00:24:10.314 rmmod nvme_keyring 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1184657 ']' 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1184657 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1184657 ']' 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1184657 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1184657 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1184657' 00:24:10.314 killing process with pid 1184657 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1184657 00:24:10.314 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1184657 00:24:10.591 13:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:10.591 13:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:10.591 13:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:10.591 13:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:10.591 13:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:10.591 13:54:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.591 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.591 13:54:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.504 13:54:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:12.504 00:24:12.504 real 0m11.125s 00:24:12.504 user 0m8.197s 00:24:12.504 sys 0m5.742s 00:24:12.504 13:54:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.504 13:54:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:12.504 ************************************ 00:24:12.504 END TEST nvmf_identify 00:24:12.504 ************************************ 00:24:12.504 13:54:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:12.504 13:54:39 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:12.504 13:54:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:12.504 13:54:39 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.504 13:54:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:12.765 ************************************ 00:24:12.765 START TEST nvmf_perf 00:24:12.765 ************************************ 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:12.765 * Looking for test storage... 00:24:12.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.765 13:54:39 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.766 
13:54:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:12.766 13:54:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:19.352 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:19.352 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.352 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:19.352 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:19.613 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.613 13:54:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.613 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.613 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.613 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:19.613 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:19.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:24:19.874 00:24:19.874 --- 10.0.0.2 ping statistics --- 00:24:19.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.874 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:19.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:24:19.874 00:24:19.874 --- 10.0.0.1 ping statistics --- 00:24:19.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.874 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1189007 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1189007 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1189007 ']' 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.874 13:54:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.874 [2024-07-15 13:54:46.292488] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:19.874 [2024-07-15 13:54:46.292546] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.874 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.874 [2024-07-15 13:54:46.362330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.134 [2024-07-15 13:54:46.428023] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.134 [2024-07-15 13:54:46.428060] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:20.134 [2024-07-15 13:54:46.428067] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.134 [2024-07-15 13:54:46.428074] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.134 [2024-07-15 13:54:46.428080] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.134 [2024-07-15 13:54:46.428215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.134 [2024-07-15 13:54:46.428409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.134 [2024-07-15 13:54:46.428564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.134 [2024-07-15 13:54:46.428565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.717 13:54:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:20.717 13:54:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:20.717 13:54:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:20.717 13:54:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:20.717 13:54:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:20.717 13:54:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.717 13:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:20.717 13:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:21.289 13:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:21.289 13:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:21.289 13:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:21.289 13:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:21.550 13:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:21.550 13:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:21.550 13:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:21.550 13:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:21.550 13:54:47 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:21.550 [2024-07-15 13:54:48.065372] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.810 13:54:48 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:21.810 13:54:48 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:21.810 13:54:48 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:22.071 13:54:48 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:22.071 13:54:48 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:22.331 13:54:48 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:22.331 [2024-07-15 13:54:48.747843] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.331 13:54:48 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:22.592 13:54:48 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:22.592 13:54:48 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:22.592 13:54:48 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:22.592 13:54:48 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:23.978 Initializing NVMe Controllers 00:24:23.978 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:23.978 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:23.978 Initialization complete. Launching workers. 00:24:23.978 ======================================================== 00:24:23.978 Latency(us) 00:24:23.978 Device Information : IOPS MiB/s Average min max 00:24:23.978 PCIE (0000:65:00.0) NSID 1 from core 0: 79392.08 310.13 402.36 13.28 4928.32 00:24:23.978 ======================================================== 00:24:23.978 Total : 79392.08 310.13 402.36 13.28 4928.32 00:24:23.978 00:24:23.978 13:54:50 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.978 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.919 Initializing NVMe Controllers 00:24:24.919 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:24.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:24.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:24.919 Initialization complete. Launching workers. 
00:24:24.919 ======================================================== 00:24:24.919 Latency(us) 00:24:24.919 Device Information : IOPS MiB/s Average min max 00:24:24.919 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.87 0.30 13164.55 400.93 46590.10 00:24:24.919 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 49.92 0.19 20032.24 7956.27 50879.76 00:24:24.919 ======================================================== 00:24:24.919 Total : 127.79 0.50 15847.24 400.93 50879.76 00:24:24.919 00:24:24.919 13:54:51 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:25.179 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.564 Initializing NVMe Controllers 00:24:26.564 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:26.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:26.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:26.564 Initialization complete. Launching workers. 00:24:26.564 ======================================================== 00:24:26.564 Latency(us) 00:24:26.564 Device Information : IOPS MiB/s Average min max 00:24:26.564 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9984.99 39.00 3205.33 558.71 7461.30 00:24:26.564 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3975.00 15.53 8094.84 6873.63 15698.22 00:24:26.564 ======================================================== 00:24:26.564 Total : 13959.99 54.53 4597.58 558.71 15698.22 00:24:26.564 00:24:26.564 13:54:52 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:26.564 13:54:52 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:26.564 13:54:52 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:26.564 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.126 Initializing NVMe Controllers 00:24:29.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:29.126 Controller IO queue size 128, less than required. 00:24:29.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:29.126 Controller IO queue size 128, less than required. 00:24:29.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:29.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:29.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:29.126 Initialization complete. Launching workers. 
00:24:29.126 ======================================================== 00:24:29.126 Latency(us) 00:24:29.126 Device Information : IOPS MiB/s Average min max 00:24:29.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 892.50 223.12 149305.73 72337.74 255306.71 00:24:29.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 614.50 153.62 219744.87 77314.51 323204.52 00:24:29.126 ======================================================== 00:24:29.126 Total : 1507.00 376.75 178028.26 72337.74 323204.52 00:24:29.126 00:24:29.126 13:54:55 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:29.126 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.387 No valid NVMe controllers or AIO or URING devices found 00:24:29.387 Initializing NVMe Controllers 00:24:29.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:29.387 Controller IO queue size 128, less than required. 00:24:29.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:29.387 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:29.387 Controller IO queue size 128, less than required. 00:24:29.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:29.387 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:29.387 WARNING: Some requested NVMe devices were skipped 00:24:29.387 13:54:55 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:29.387 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.956 Initializing NVMe Controllers 00:24:31.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:31.956 Controller IO queue size 128, less than required. 00:24:31.956 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.956 Controller IO queue size 128, less than required. 00:24:31.956 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:31.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:31.956 Initialization complete. Launching workers. 
00:24:31.956 00:24:31.956 ==================== 00:24:31.956 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:31.956 TCP transport: 00:24:31.956 polls: 36604 00:24:31.956 idle_polls: 12339 00:24:31.956 sock_completions: 24265 00:24:31.956 nvme_completions: 3811 00:24:31.956 submitted_requests: 5768 00:24:31.956 queued_requests: 1 00:24:31.956 00:24:31.956 ==================== 00:24:31.956 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:31.956 TCP transport: 00:24:31.956 polls: 35270 00:24:31.956 idle_polls: 12438 00:24:31.956 sock_completions: 22832 00:24:31.956 nvme_completions: 3785 00:24:31.956 submitted_requests: 5758 00:24:31.956 queued_requests: 1 00:24:31.956 ======================================================== 00:24:31.956 Latency(us) 00:24:31.956 Device Information : IOPS MiB/s Average min max 00:24:31.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 952.49 238.12 138867.34 68938.12 236322.35 00:24:31.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 945.99 236.50 137039.39 69479.05 211674.39 00:24:31.956 ======================================================== 00:24:31.956 Total : 1898.49 474.62 137956.50 68938.12 236322.35 00:24:31.956 00:24:31.956 13:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:31.956 13:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:32.217 rmmod nvme_tcp 00:24:32.217 rmmod nvme_fabrics 00:24:32.217 rmmod nvme_keyring 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1189007 ']' 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1189007 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1189007 ']' 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1189007 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1189007 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:32.217 13:54:58 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1189007' 00:24:32.217 killing process with pid 1189007 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1189007 00:24:32.217 13:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1189007 00:24:34.126 13:55:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:34.126 13:55:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:34.126 13:55:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:34.126 13:55:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:34.126 13:55:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:34.126 13:55:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.126 13:55:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.126 13:55:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.673 13:55:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:36.673 00:24:36.673 real 0m23.641s 00:24:36.673 user 0m58.231s 00:24:36.673 sys 0m7.630s 00:24:36.673 13:55:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:36.673 13:55:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:36.673 ************************************ 00:24:36.673 END TEST nvmf_perf 00:24:36.673 ************************************ 00:24:36.673 13:55:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:36.673 13:55:02 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:36.673 13:55:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:36.673 13:55:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:36.673 13:55:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:36.673 ************************************ 00:24:36.673 START TEST nvmf_fio_host 00:24:36.673 ************************************ 00:24:36.673 13:55:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:36.673 * Looking for test storage... 
00:24:36.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:36.673 13:55:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.673 13:55:02 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.673 13:55:02 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.673 13:55:02 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.673 13:55:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.673 13:55:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.673 13:55:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.673 13:55:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:36.674 13:55:02 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.819 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:44.820 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:44.820 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:44.820 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:44.820 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
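[editor's note] The nvmf_tcp_init phase traced below (and earlier in the nvmf_perf run) reduces to the command sequence sketched here. This is a condensed, hedged summary of what the harness's nvmf/common.sh is seen doing in this log, using the interface names that appear above (cvl_0_0 / cvl_0_1 on the two E810 ports); it is not a substitute for the script itself. One port is moved into a private network namespace so the target (10.0.0.2, launched later via "ip netns exec cvl_0_0_ns_spdk nvmf_tgt") and the initiator (10.0.0.1) can talk over real NICs on a single host:

  # move the target-side port into its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address the initiator side (default namespace) and the target side (inside the namespace)
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  # bring links up, including loopback inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic on the default port 4420
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check connectivity in both directions
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1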
00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.820 13:55:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:44.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:24:44.820 00:24:44.820 --- 10.0.0.2 ping statistics --- 00:24:44.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.820 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:44.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:24:44.820 00:24:44.820 --- 10.0.0.1 ping statistics --- 00:24:44.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.820 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1195995 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1195995 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1195995 ']' 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:44.820 13:55:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.820 [2024-07-15 13:55:10.296260] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:44.820 [2024-07-15 13:55:10.296332] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.820 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.820 [2024-07-15 13:55:10.367923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:44.820 [2024-07-15 13:55:10.442495] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:44.820 [2024-07-15 13:55:10.442533] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.820 [2024-07-15 13:55:10.442540] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.820 [2024-07-15 13:55:10.442547] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.820 [2024-07-15 13:55:10.442552] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:44.820 [2024-07-15 13:55:10.442689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.820 [2024-07-15 13:55:10.442814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.820 [2024-07-15 13:55:10.442970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.820 [2024-07-15 13:55:10.442971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.820 13:55:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.820 13:55:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:24:44.821 13:55:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:44.821 [2024-07-15 13:55:11.225071] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.821 13:55:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:44.821 13:55:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:44.821 13:55:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.821 13:55:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:45.081 Malloc1 00:24:45.081 13:55:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:45.341 13:55:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:45.341 13:55:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:45.601 [2024-07-15 13:55:11.959354] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.602 13:55:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:45.887 13:55:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:46.162 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:46.162 fio-3.35 00:24:46.162 Starting 1 thread 00:24:46.162 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.710 00:24:48.710 test: (groupid=0, jobs=1): err= 0: pid=1196594: Mon Jul 15 13:55:14 2024 00:24:48.710 read: IOPS=9753, BW=38.1MiB/s (40.0MB/s)(76.4MiB/2006msec) 00:24:48.710 slat (usec): min=2, max=277, avg= 2.22, stdev= 2.80 00:24:48.710 clat (usec): min=3606, max=12077, avg=7256.54, stdev=546.73 00:24:48.710 lat (usec): min=3641, max=12079, avg=7258.76, stdev=546.61 00:24:48.710 clat percentiles (usec): 00:24:48.710 | 1.00th=[ 5932], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6849], 00:24:48.710 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:24:48.710 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7898], 95.00th=[ 8094], 00:24:48.710 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[11207], 99.95th=[11731], 00:24:48.710 | 99.99th=[11994] 00:24:48.710 bw ( KiB/s): min=38288, 
max=39448, per=99.92%, avg=38982.00, stdev=508.59, samples=4 00:24:48.710 iops : min= 9572, max= 9862, avg=9745.50, stdev=127.15, samples=4 00:24:48.710 write: IOPS=9760, BW=38.1MiB/s (40.0MB/s)(76.5MiB/2006msec); 0 zone resets 00:24:48.710 slat (usec): min=2, max=276, avg= 2.32, stdev= 2.17 00:24:48.710 clat (usec): min=2903, max=11221, avg=5811.55, stdev=449.97 00:24:48.710 lat (usec): min=2920, max=11223, avg=5813.88, stdev=449.89 00:24:48.710 clat percentiles (usec): 00:24:48.710 | 1.00th=[ 4752], 5.00th=[ 5080], 10.00th=[ 5276], 20.00th=[ 5473], 00:24:48.710 | 30.00th=[ 5604], 40.00th=[ 5735], 50.00th=[ 5800], 60.00th=[ 5932], 00:24:48.710 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6456], 00:24:48.710 | 99.00th=[ 6783], 99.50th=[ 6915], 99.90th=[ 8848], 99.95th=[ 9634], 00:24:48.710 | 99.99th=[11207] 00:24:48.710 bw ( KiB/s): min=38784, max=39552, per=100.00%, avg=39056.00, stdev=348.10, samples=4 00:24:48.710 iops : min= 9696, max= 9888, avg=9764.00, stdev=87.02, samples=4 00:24:48.710 lat (msec) : 4=0.06%, 10=99.84%, 20=0.10% 00:24:48.710 cpu : usr=68.18%, sys=26.98%, ctx=41, majf=0, minf=7 00:24:48.710 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:48.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:48.710 issued rwts: total=19566,19580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.710 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:48.710 00:24:48.710 Run status group 0 (all jobs): 00:24:48.710 READ: bw=38.1MiB/s (40.0MB/s), 38.1MiB/s-38.1MiB/s (40.0MB/s-40.0MB/s), io=76.4MiB (80.1MB), run=2006-2006msec 00:24:48.710 WRITE: bw=38.1MiB/s (40.0MB/s), 38.1MiB/s-38.1MiB/s (40.0MB/s-40.0MB/s), io=76.5MiB (80.2MB), run=2006-2006msec 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # 
awk '{print $3}' 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:48.710 13:55:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:48.969 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:48.969 fio-3.35 00:24:48.969 Starting 1 thread 00:24:48.969 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.512 00:24:51.512 test: (groupid=0, jobs=1): err= 0: pid=1197353: Mon Jul 15 13:55:17 2024 00:24:51.512 read: IOPS=8821, BW=138MiB/s (145MB/s)(277MiB/2008msec) 00:24:51.512 slat (usec): min=3, max=115, avg= 3.60, stdev= 1.45 00:24:51.512 clat (usec): min=671, max=19809, avg=8841.26, stdev=2021.88 00:24:51.512 lat (usec): min=680, max=19813, avg=8844.86, stdev=2022.03 00:24:51.512 clat percentiles (usec): 00:24:51.512 | 1.00th=[ 4686], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 6980], 00:24:51.512 | 30.00th=[ 7570], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9372], 00:24:51.512 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11469], 95.00th=[12125], 00:24:51.512 | 99.00th=[13829], 99.50th=[14484], 99.90th=[15664], 99.95th=[15795], 00:24:51.512 | 99.99th=[17171] 00:24:51.512 bw ( KiB/s): min=60576, max=81312, per=50.58%, avg=71392.00, stdev=10547.65, samples=4 00:24:51.512 iops : min= 3786, max= 5082, avg=4462.00, stdev=659.23, samples=4 00:24:51.512 write: IOPS=5251, BW=82.0MiB/s (86.0MB/s)(145MiB/1768msec); 0 zone resets 00:24:51.512 slat (usec): min=39, max=328, avg=41.11, stdev= 7.55 00:24:51.512 clat (usec): min=3597, max=17526, avg=9735.86, stdev=1624.52 00:24:51.512 lat (usec): min=3637, max=17566, avg=9776.97, stdev=1626.28 00:24:51.512 clat percentiles (usec): 00:24:51.512 | 1.00th=[ 6587], 5.00th=[ 7373], 10.00th=[ 7832], 20.00th=[ 8356], 00:24:51.512 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:24:51.512 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11863], 95.00th=[12780], 00:24:51.512 | 99.00th=[14091], 99.50th=[14484], 99.90th=[15664], 99.95th=[16057], 00:24:51.512 | 99.99th=[17433] 00:24:51.512 bw ( KiB/s): min=63264, max=84992, per=88.40%, avg=74272.00, stdev=10841.75, samples=4 00:24:51.512 iops : min= 3954, max= 5312, avg=4642.00, stdev=677.61, samples=4 00:24:51.512 lat (usec) : 750=0.01% 00:24:51.512 lat (msec) : 2=0.01%, 4=0.18%, 10=67.41%, 20=32.40% 00:24:51.512 cpu : usr=82.21%, sys=14.30%, ctx=11, majf=0, minf=22 
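Stripped of the xtrace bookkeeping above, the fio_nvme wrapper used in these runs amounts to preloading the SPDK NVMe fio plugin and pointing fio's filename at the NVMe/TCP subsystem instead of a block device. A minimal sketch, using the plugin and job-file paths from this workspace (adjust them to your own build tree; the job file is what selects ioengine=spdk):

  # Preload the SPDK NVMe fio plugin built in the SPDK tree
  PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
  JOB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio
  # The "filename" encodes transport, address family, address, service ID and
  # namespace rather than a device path
  LD_PRELOAD=$PLUGIN /usr/src/fio/fio "$JOB" \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096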
00:24:51.512 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:51.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:51.512 issued rwts: total=17713,9284,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:51.512 00:24:51.512 Run status group 0 (all jobs): 00:24:51.512 READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=277MiB (290MB), run=2008-2008msec 00:24:51.512 WRITE: bw=82.0MiB/s (86.0MB/s), 82.0MiB/s-82.0MiB/s (86.0MB/s-86.0MB/s), io=145MiB (152MB), run=1768-1768msec 00:24:51.512 13:55:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:51.512 13:55:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:51.512 13:55:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:51.512 13:55:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:51.512 13:55:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:51.512 13:55:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:51.512 13:55:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:51.512 13:55:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:51.512 13:55:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:51.512 13:55:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:51.513 13:55:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:51.513 rmmod nvme_tcp 00:24:51.513 rmmod nvme_fabrics 00:24:51.513 rmmod nvme_keyring 00:24:51.513 13:55:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:51.513 13:55:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:51.513 13:55:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:51.513 13:55:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1195995 ']' 00:24:51.513 13:55:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1195995 00:24:51.513 13:55:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1195995 ']' 00:24:51.513 13:55:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1195995 00:24:51.513 13:55:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:51.513 13:55:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:51.513 13:55:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1195995 00:24:51.513 13:55:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:51.513 13:55:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:51.513 13:55:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1195995' 00:24:51.513 killing process with pid 1195995 00:24:51.513 13:55:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1195995 00:24:51.513 13:55:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1195995 00:24:51.815 13:55:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:51.815 13:55:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:24:51.815 13:55:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:51.815 13:55:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:51.815 13:55:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:51.815 13:55:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.815 13:55:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.815 13:55:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.729 13:55:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:53.729 00:24:53.729 real 0m17.364s 00:24:53.729 user 1m5.788s 00:24:53.729 sys 0m7.428s 00:24:53.729 13:55:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:53.729 13:55:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.729 ************************************ 00:24:53.729 END TEST nvmf_fio_host 00:24:53.729 ************************************ 00:24:53.729 13:55:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:53.729 13:55:20 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:53.729 13:55:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:53.729 13:55:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:53.729 13:55:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:53.729 ************************************ 00:24:53.729 START TEST nvmf_failover 00:24:53.729 ************************************ 00:24:53.729 13:55:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:53.990 * Looking for test storage... 
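For reference, the nvmf_failover suite that starts here is an ordinary shell test and can be reproduced outside Jenkins with the same argument run_test passes above; a sketch, assuming an SPDK checkout at the workspace path used in this log and a host prepared like this CI node (supported NICs, hugepages, root):

  # Run the NVMe-oF host failover test over TCP, as this job does
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/nvmf/host/failover.sh --transport=tcp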
00:24:53.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.990 13:55:20 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:53.991 13:55:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:02.127 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:02.127 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:02.127 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:02.127 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:02.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:02.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:25:02.127 00:25:02.127 --- 10.0.0.2 ping statistics --- 00:25:02.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.127 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:02.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:02.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:25:02.127 00:25:02.127 --- 10.0.0.1 ping statistics --- 00:25:02.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.127 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1201766 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1201766 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1201766 ']' 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:02.127 13:55:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:02.127 [2024-07-15 13:55:27.571576] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
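Once the nvmf_tgt application above finishes starting, the failover script stands up the test subsystem over JSON-RPC. Condensed from the trace that follows, the sequence is roughly (rpc.py path as used in this workspace):

  # Target-side setup: TCP transport, a 64 MiB malloc namespace, and three listeners
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done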
00:25:02.127 [2024-07-15 13:55:27.571630] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.127 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.127 [2024-07-15 13:55:27.656773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:02.127 [2024-07-15 13:55:27.742774] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.127 [2024-07-15 13:55:27.742837] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.127 [2024-07-15 13:55:27.742846] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.127 [2024-07-15 13:55:27.742853] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.127 [2024-07-15 13:55:27.742859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.127 [2024-07-15 13:55:27.743028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.127 [2024-07-15 13:55:27.743237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.127 [2024-07-15 13:55:27.743376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.127 13:55:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.127 13:55:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:02.127 13:55:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:02.127 13:55:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:02.127 13:55:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:02.127 13:55:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.127 13:55:28 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:02.127 [2024-07-15 13:55:28.529356] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.127 13:55:28 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:02.387 Malloc0 00:25:02.387 13:55:28 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:02.387 13:55:28 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:02.648 13:55:29 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.925 [2024-07-15 13:55:29.197617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.925 13:55:29 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:02.925 [2024-07-15 
13:55:29.358021] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:02.925 13:55:29 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:03.203 [2024-07-15 13:55:29.530573] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:03.203 13:55:29 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1202272 00:25:03.203 13:55:29 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:03.203 13:55:29 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:03.203 13:55:29 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1202272 /var/tmp/bdevperf.sock 00:25:03.203 13:55:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1202272 ']' 00:25:03.203 13:55:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:03.203 13:55:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:03.203 13:55:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:03.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:03.203 13:55:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:03.203 13:55:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:04.143 13:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:04.143 13:55:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:04.143 13:55:30 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:04.143 NVMe0n1 00:25:04.143 13:55:30 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:04.714 00:25:04.714 13:55:30 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1202460 00:25:04.714 13:55:30 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:04.714 13:55:30 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:05.655 13:55:31 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.655 [2024-07-15 13:55:32.102724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1848c50 is same with the state(5) to be set 00:25:05.655 [2024-07-15 13:55:32.102781] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1848c50 is same with the state(5) to be set 00:25:05.657 13:55:32 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:08.968 13:55:35 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.968 00:25:08.968 13:55:35 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:09.229 [2024-07-15 13:55:35.591065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184a370 is same with the state(5) to be set
00:25:09.229 13:55:35 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:12.528 13:55:38 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:12.528 [2024-07-15 13:55:38.768638] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:12.528 13:55:38 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:13.470 13:55:39 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:13.470 [2024-07-15 13:55:39.948033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184aa70 is same with the state(5) to be set
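The add_listener/remove_listener pair above walks the subsystem back to its original port: 4420 is re-added (the nvmf_tcp_listen notice confirms the target is listening again) and the temporary 4422 listener is dropped, which produces another burst of recv-state messages. A minimal sketch of the same fail-back step is below; the nvmf_get_subsystems query at the end is only an illustrative way to confirm the resulting listener list and is not run by failover.sh here.

#!/usr/bin/env bash
# Hypothetical re-creation of the fail-back step (host/failover.sh@53 and @57 above).
set -euo pipefail

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed checkout location
NQN=nqn.2016-06.io.spdk:cnode1

# Restore the original listener, then retire the temporary one.
"$RPC" nvmf_subsystem_add_listener    "$NQN" -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

# Illustrative check, not part of this run: the subsystem dump includes its listeners.
"$RPC" nvmf_get_subsystems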
00:25:13.470 13:55:39 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1202460
00:25:20.057 0
00:25:20.057 13:55:46 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1202272
00:25:20.057 13:55:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1202272 ']'
00:25:20.057 13:55:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1202272
00:25:20.057 13:55:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:25:20.057 13:55:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:20.057 13:55:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1202272
00:25:20.057 13:55:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:20.057 13:55:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:20.057 13:55:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1202272'
00:25:20.057 killing process with pid 1202272
00:25:20.057 13:55:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1202272
00:25:20.057 13:55:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1202272
00:25:20.057 13:55:46 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:20.057 [2024-07-15 13:55:29.608232] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:25:20.057 [2024-07-15 13:55:29.608291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202272 ]
00:25:20.057 EAL: No free 2048 kB hugepages reported on node 1
00:25:20.057 [2024-07-15 13:55:29.667055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:20.057 [2024-07-15 13:55:29.731661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:20.057 Running I/O for 15 seconds...
00:25:20.057 [2024-07-15 13:55:32.104570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.058 [2024-07-15 13:55:32.104606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.060 [2024-07-15 13:55:32.106082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.060 [2024-07-15 13:55:32.106089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.061 [2024-07-15 13:55:32.106650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.061 [2024-07-15 13:55:32.106661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.061 [2024-07-15 13:55:32.106683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:20.061 [2024-07-15 13:55:32.106691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97216 len:8 PRP1 0x0 PRP2 0x0
00:25:20.061 [2024-07-15 13:55:32.106698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.061 [2024-07-15 13:55:32.106708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:20.061 [2024-07-15 13:55:32.106714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:20.061 [2024-07-15 13:55:32.106720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97224 len:8 PRP1 0x0 PRP2 0x0
00:25:20.061 [2024-07-15 13:55:32.106728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.061 [2024-07-15 13:55:32.106735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:20.061 [2024-07-15 13:55:32.106741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:20.061 [2024-07-15 13:55:32.106747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97232 len:8 PRP1 0x0 PRP2 0x0
00:25:20.061 [2024-07-15 13:55:32.106754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.061 [2024-07-15 13:55:32.106789] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17c9300 was disconnected and freed. reset controller.
00:25:20.061 [2024-07-15 13:55:32.106799] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:25:20.061 [2024-07-15 13:55:32.106818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:20.061 [2024-07-15 13:55:32.106827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:20.061 [2024-07-15 13:55:32.106835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:20.061 [2024-07-15 13:55:32.106842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:20.061 [2024-07-15 13:55:32.106850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:20.061 [2024-07-15 13:55:32.106857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:20.061 [2024-07-15 13:55:32.106865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:20.061 [2024-07-15 13:55:32.106871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:20.061 [2024-07-15 13:55:32.106879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:20.061 [2024-07-15 13:55:32.106914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a7ef0 (9): Bad file descriptor 
00:25:20.061 [2024-07-15 13:55:32.110478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:25:20.061 [2024-07-15 13:55:32.188471] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:20.061 [2024-07-15 13:55:35.592662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592875] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.592991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.592998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.593007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.593014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.593023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.593030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.593039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.061 [2024-07-15 13:55:35.593047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.593056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.061 [2024-07-15 13:55:35.593063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.593072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.061 [2024-07-15 13:55:35.593079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.593088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.061 [2024-07-15 13:55:35.593095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.593105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.061 [2024-07-15 13:55:35.593112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.061 [2024-07-15 13:55:35.593121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.061 [2024-07-15 13:55:35.593132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:5 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45160 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 
13:55:35.593544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.062 [2024-07-15 13:55:35.593652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.062 [2024-07-15 13:55:35.593659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.063 [2024-07-15 13:55:35.593936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.593953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.593970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.593987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.593996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 
[2024-07-15 13:55:35.594223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.063 [2024-07-15 13:55:35.594374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.063 [2024-07-15 13:55:35.594384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:27 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44952 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:35.594815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:20.064 [2024-07-15 13:55:35.594845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:20.064 [2024-07-15 13:55:35.594851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45000 len:8 PRP1 0x0 PRP2 0x0 00:25:20.064 [2024-07-15 13:55:35.594859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:35.594898] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17cb480 was disconnected and freed. reset controller. 
00:25:20.064 [2024-07-15 13:55:35.594908] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:25:20.064 [2024-07-15 13:55:35.594927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:20.064 [2024-07-15 13:55:35.594936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:20.064 [2024-07-15 13:55:35.594945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:20.064 [2024-07-15 13:55:35.594952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:20.064 [2024-07-15 13:55:35.594960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:20.064 [2024-07-15 13:55:35.594967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:20.064 [2024-07-15 13:55:35.594975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:20.064 [2024-07-15 13:55:35.594982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:20.064 [2024-07-15 13:55:35.594989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:20.064 [2024-07-15 13:55:35.598553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:25:20.064 [2024-07-15 13:55:35.598580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a7ef0 (9): Bad file descriptor 
00:25:20.064 [2024-07-15 13:55:35.673355] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:20.064 [2024-07-15 13:55:39.950097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:39.950140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:39.950157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:39.950165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:39.950175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:39.950183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:39.950193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:39.950200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:39.950209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:39.950221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:39.950231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:39.950238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:39.950248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.064 [2024-07-15 13:55:39.950255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.064 [2024-07-15 13:55:39.950264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950313] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:43 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:69320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:69376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69384 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:69400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:69424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.065 [2024-07-15 13:55:39.950963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:69456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.065 [2024-07-15 13:55:39.950970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.950981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:20.066 [2024-07-15 13:55:39.950990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951163] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951327] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:69672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:69744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.066 [2024-07-15 13:55:39.951557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.066 [2024-07-15 13:55:39.951573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.066 [2024-07-15 13:55:39.951583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:69760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:69776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:69784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 
[2024-07-15 13:55:39.951663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:69800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:69808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:69824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:69832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:69840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:69856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:69872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:69880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:69888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:69904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:69944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.951990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:6 nsid:1 lba:69952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.951997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.952005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.952012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.952021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:69968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.952029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.952037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.952044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.952053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.952060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.952070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:69992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.952077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.952086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.067 [2024-07-15 13:55:39.952092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.952114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:20.067 [2024-07-15 13:55:39.952125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70008 len:8 PRP1 0x0 PRP2 0x0 00:25:20.067 [2024-07-15 13:55:39.952133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.952143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:20.067 [2024-07-15 13:55:39.952149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:20.067 [2024-07-15 13:55:39.952155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70016 len:8 PRP1 0x0 PRP2 0x0 00:25:20.067 [2024-07-15 13:55:39.952161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.952169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:20.067 [2024-07-15 13:55:39.952174] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:20.067 [2024-07-15 13:55:39.952180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70024 len:8 PRP1 0x0 PRP2 0x0 00:25:20.067 [2024-07-15 13:55:39.952186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.952193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:20.067 [2024-07-15 13:55:39.952199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:20.067 [2024-07-15 13:55:39.952204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70032 len:8 PRP1 0x0 PRP2 0x0 00:25:20.067 [2024-07-15 13:55:39.952211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.952218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:20.067 [2024-07-15 13:55:39.952224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:20.067 [2024-07-15 13:55:39.952229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70040 len:8 PRP1 0x0 PRP2 0x0 00:25:20.067 [2024-07-15 13:55:39.952236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.952243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:20.067 [2024-07-15 13:55:39.952249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:20.067 [2024-07-15 13:55:39.952255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70048 len:8 PRP1 0x0 PRP2 0x0 00:25:20.067 [2024-07-15 13:55:39.952262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.067 [2024-07-15 13:55:39.952269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:20.067 [2024-07-15 13:55:39.952275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:20.067 [2024-07-15 13:55:39.952280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70056 len:8 PRP1 0x0 PRP2 0x0 00:25:20.068 [2024-07-15 13:55:39.952287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.068 [2024-07-15 13:55:39.952295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:20.068 [2024-07-15 13:55:39.952300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:20.068 [2024-07-15 13:55:39.952306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70064 len:8 PRP1 0x0 PRP2 0x0 00:25:20.068 [2024-07-15 13:55:39.952313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.068 [2024-07-15 13:55:39.952322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:20.068 [2024-07-15 13:55:39.952328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:25:20.068 [2024-07-15 13:55:39.952334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70072 len:8 PRP1 0x0 PRP2 0x0 00:25:20.068 [2024-07-15 13:55:39.952341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.068 [2024-07-15 13:55:39.952348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:20.068 [2024-07-15 13:55:39.952354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:20.068 [2024-07-15 13:55:39.952359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70080 len:8 PRP1 0x0 PRP2 0x0 00:25:20.068 [2024-07-15 13:55:39.952366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.068 [2024-07-15 13:55:39.952404] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17cc000 was disconnected and freed. reset controller. 00:25:20.068 [2024-07-15 13:55:39.952414] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:20.068 [2024-07-15 13:55:39.952434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.068 [2024-07-15 13:55:39.952442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.068 [2024-07-15 13:55:39.952450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.068 [2024-07-15 13:55:39.952457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.068 [2024-07-15 13:55:39.952465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.068 [2024-07-15 13:55:39.952472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.068 [2024-07-15 13:55:39.952480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.068 [2024-07-15 13:55:39.952486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.068 [2024-07-15 13:55:39.952493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:20.068 [2024-07-15 13:55:39.952527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a7ef0 (9): Bad file descriptor 00:25:20.068 [2024-07-15 13:55:39.956053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.068 [2024-07-15 13:55:40.031059] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
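The long run of "ABORTED - SQ DELETION" notices above is bdev_nvme draining its queues during a path failure: every READ/WRITE still outstanding or queued on the old qpair is completed with that status, the qpair (0x17cc000) is disconnected and freed, and the driver fails over from 10.0.0.2:4422 to 10.0.0.2:4420 before resetting the controller on the surviving path. The trace that follows only checks how many such resets completed. A minimal sketch of that check, assuming the bdevperf output was captured to the try.txt file referenced later in the trace (the file path here is illustrative, not taken from the grep itself):

    # count completed failovers in the captured bdevperf output;
    # the trace below expects exactly 3 for the 15-second run
    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$out")
    (( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }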
00:25:20.068 00:25:20.068 Latency(us) 00:25:20.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.068 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:20.068 Verification LBA range: start 0x0 length 0x4000 00:25:20.068 NVMe0n1 : 15.01 11389.41 44.49 545.17 0.00 10696.77 1037.65 13544.11 00:25:20.068 =================================================================================================================== 00:25:20.068 Total : 11389.41 44.49 545.17 0.00 10696.77 1037.65 13544.11 00:25:20.068 Received shutdown signal, test time was about 15.000000 seconds 00:25:20.068 00:25:20.068 Latency(us) 00:25:20.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.068 =================================================================================================================== 00:25:20.068 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.068 13:55:46 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:20.068 13:55:46 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:20.068 13:55:46 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:20.068 13:55:46 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1205458 00:25:20.068 13:55:46 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1205458 /var/tmp/bdevperf.sock 00:25:20.068 13:55:46 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:20.068 13:55:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1205458 ']' 00:25:20.068 13:55:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:20.068 13:55:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:20.068 13:55:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:20.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
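The 15-second table above is internally consistent: the job line reports depth 128 with 4096-byte I/Os, so the MiB/s column is simply IOPS times the I/O size. A quick sanity check on the NVMe0n1 row, using only numbers taken from the table:

    # 11389.41 IOPS x 4096 B per I/O, expressed in MiB/s (1 MiB = 1048576 B)
    awk 'BEGIN { printf "%.2f MiB/s\n", 11389.41 * 4096 / 1048576 }'   # prints 44.49, matching the table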
00:25:20.068 13:55:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:20.068 13:55:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:20.638 13:55:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:20.638 13:55:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:20.638 13:55:47 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:20.899 [2024-07-15 13:55:47.258292] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:20.899 13:55:47 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:21.160 [2024-07-15 13:55:47.426686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:21.160 13:55:47 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:21.420 NVMe0n1 00:25:21.420 13:55:47 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.003 00:25:22.003 13:55:48 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.003 00:25:22.003 13:55:48 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:22.003 13:55:48 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:22.263 13:55:48 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.523 13:55:48 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:25.823 13:55:51 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:25.823 13:55:51 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:25.823 13:55:51 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1206519 00:25:25.823 13:55:51 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1206519 00:25:25.823 13:55:51 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:26.763 0 00:25:26.763 13:55:53 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:26.763 [2024-07-15 13:55:46.344690] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
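The rpc.py calls above are the whole failover topology for this case: the target gets two extra listeners on ports 4421 and 4422, the bdevperf side attaches the same controller name NVMe0 once per port so that bdev_nvme keeps 4421 and 4422 as alternate trids (the bdev_nvme_failover_trid notices earlier in the log refer to exactly these), and the active 4420 path is then detached to force a failover while verify I/O is running. Condensed into a sketch, with the script paths and RPC socket copied from the trace rather than a standalone, complete script:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # expose the subsystem on two more ports of the running target
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

    # attach the same controller name once per path; the later attaches
    # register the extra ports as failover trids rather than new bdevs
    for port in 4420 4421 4422; do
        $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
    done

    # drop the active path so bdev_nvme fails over to the next trid
    $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN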
00:25:26.763 [2024-07-15 13:55:46.344751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205458 ] 00:25:26.763 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.763 [2024-07-15 13:55:46.403727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.763 [2024-07-15 13:55:46.465324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.763 [2024-07-15 13:55:48.780432] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:26.763 [2024-07-15 13:55:48.780477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.764 [2024-07-15 13:55:48.780489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.764 [2024-07-15 13:55:48.780499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.764 [2024-07-15 13:55:48.780507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.764 [2024-07-15 13:55:48.780515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.764 [2024-07-15 13:55:48.780522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.764 [2024-07-15 13:55:48.780530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.764 [2024-07-15 13:55:48.780537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.764 [2024-07-15 13:55:48.780544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:26.764 [2024-07-15 13:55:48.780570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:26.764 [2024-07-15 13:55:48.780583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5ef0 (9): Bad file descriptor 00:25:26.764 [2024-07-15 13:55:48.791398] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:26.764 Running I/O for 1 seconds... 
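This second bdevperf instance was started with -z further up in the trace, which means it initializes, opens /var/tmp/bdevperf.sock, and then waits instead of running its workload immediately; the listeners and controller paths are configured over that socket first, and the 1-second verify run whose startup banner is shown above is only kicked off afterwards with the perform_tests helper. Roughly the sequence the trace follows (paths copied from the trace; error handling and the remaining flags omitted):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock

    # start bdevperf idle on an RPC socket (-z): 128-deep 4 KiB verify for 1 second
    $SPDK/build/examples/bdevperf -z -r $SOCK -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!

    # ... add listeners and attach/detach controllers over $SOCK as shown above ...

    # trigger the queued workload and wait for its result
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests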
00:25:26.764 00:25:26.764 Latency(us) 00:25:26.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.764 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:26.764 Verification LBA range: start 0x0 length 0x4000 00:25:26.764 NVMe0n1 : 1.01 11612.74 45.36 0.00 0.00 10971.16 2088.96 11741.87 00:25:26.764 =================================================================================================================== 00:25:26.764 Total : 11612.74 45.36 0.00 0.00 10971.16 2088.96 11741.87 00:25:26.764 13:55:53 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:26.764 13:55:53 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:26.764 13:55:53 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:27.024 13:55:53 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:27.024 13:55:53 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:27.284 13:55:53 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:27.284 13:55:53 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:30.643 13:55:56 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:30.643 13:55:56 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:30.643 13:55:56 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1205458 00:25:30.643 13:55:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1205458 ']' 00:25:30.643 13:55:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1205458 00:25:30.643 13:55:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:30.643 13:55:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:30.643 13:55:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1205458 00:25:30.643 13:55:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:30.643 13:55:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:30.643 13:55:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1205458' 00:25:30.643 killing process with pid 1205458 00:25:30.643 13:55:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1205458 00:25:30.643 13:55:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1205458 00:25:30.643 13:55:57 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:30.643 13:55:57 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:30.903 
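The same arithmetic holds for the 1-second run in the table just above, and with a queue depth of 128 the average latency can also be cross-checked against Little's law (only approximately, since bdevperf's per-period accounting differs a little):

    # 11612.74 IOPS x 4096 B -> MiB/s, and 128 / IOPS -> expected average latency
    awk 'BEGIN { printf "%.2f MiB/s, %.2f ms\n", 11612.74*4096/1048576, 128/11612.74*1000 }'
    # prints 45.36 MiB/s and ~11.02 ms, against the reported 45.36 MiB/s and 10971.16 us (10.97 ms)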
13:55:57 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:30.903 rmmod nvme_tcp 00:25:30.903 rmmod nvme_fabrics 00:25:30.903 rmmod nvme_keyring 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1201766 ']' 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1201766 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1201766 ']' 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1201766 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1201766 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1201766' 00:25:30.903 killing process with pid 1201766 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1201766 00:25:30.903 13:55:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1201766 00:25:31.164 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:31.164 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:31.164 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:31.164 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:31.164 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:31.164 13:55:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.164 13:55:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.164 13:55:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.081 13:55:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:33.342 00:25:33.342 real 0m39.370s 00:25:33.342 user 2m1.905s 00:25:33.342 sys 0m7.935s 00:25:33.342 13:55:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:33.342 13:55:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
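That is the whole teardown for this test: delete the subsystem, kill the bdevperf at pid 1205458, remove try.txt, then let nvmftestfini unload the initiator-side kernel modules (the bare rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines appear to be the output of modprobe -v -r) and kill the long-running nvmf target at pid 1201766. Reduced to the commands visible in the trace (the pids are of course specific to this run):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
    kill 1205458 && wait 1205458                            # stop bdevperf

    modprobe -v -r nvme-tcp       # here this also pulled out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 1201766 && wait 1201766                            # stop the nvmf target app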
00:25:33.342 ************************************ 00:25:33.342 END TEST nvmf_failover 00:25:33.342 ************************************ 00:25:33.342 13:55:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:33.342 13:55:59 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:33.342 13:55:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:33.342 13:55:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:33.342 13:55:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:33.342 ************************************ 00:25:33.342 START TEST nvmf_host_discovery 00:25:33.342 ************************************ 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:33.342 * Looking for test storage... 00:25:33.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.342 13:55:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:33.343 13:55:59 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:33.343 13:55:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.524 13:56:06 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:41.524 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:41.524 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:41.524 13:56:06 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:41.524 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:41.524 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:41.524 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.525 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.525 13:56:06 
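What the interface-selection block above amounts to: find the supported NICs by PCI ID (here the two Intel E810 ports, 8086:159b, driven by ice), read the net-device names behind them from sysfs, and nominate the first as the target-side port and the second as the initiator-side port. An illustrative equivalent (the real logic in test/nvmf/common.sh walks a cached PCI list rather than calling lspci):

  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      ls "/sys/bus/pci/devices/$pci/net/"      # -> cvl_0_0 and cvl_0_1 in this run
  done
  NVMF_TARGET_INTERFACE=cvl_0_0       # will be moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2
  NVMF_INITIATOR_INTERFACE=cvl_0_1    # stays in the default namespace as 10.0.0.1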
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:41.525 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:41.525 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.525 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.525 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.525 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.525 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:41.525 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.525 13:56:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:41.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.505 ms 00:25:41.525 00:25:41.525 --- 10.0.0.2 ping statistics --- 00:25:41.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.525 rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:41.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:25:41.525 00:25:41.525 --- 10.0.0.1 ping statistics --- 00:25:41.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.525 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1211809 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
1211809 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1211809 ']' 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.525 [2024-07-15 13:56:07.150204] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:41.525 [2024-07-15 13:56:07.150267] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.525 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.525 [2024-07-15 13:56:07.238019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.525 [2024-07-15 13:56:07.330041] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.525 [2024-07-15 13:56:07.330099] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.525 [2024-07-15 13:56:07.330107] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.525 [2024-07-15 13:56:07.330114] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.525 [2024-07-15 13:56:07.330135] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
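Condensed from the ip/iptables calls traced above: stale addresses are flushed, the target port is moved into its own network namespace and addressed as 10.0.0.2, the initiator port stays in the root namespace as 10.0.0.1, TCP/4420 is opened on the initiator side, connectivity is pinged in both directions, and only then is the target application launched inside the namespace (binary path shortened here):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # root ns -> target ns (0.505 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns (0.286 ms above)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &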
00:25:41.525 [2024-07-15 13:56:07.330160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.525 [2024-07-15 13:56:07.981662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.525 [2024-07-15 13:56:07.993854] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.525 13:56:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.525 null0 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.525 null1 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1211922 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1211922 /tmp/host.sock 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1211922 ']' 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:41.525 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.525 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.786 [2024-07-15 13:56:08.089450] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:41.786 [2024-07-15 13:56:08.089521] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211922 ] 00:25:41.786 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.786 [2024-07-15 13:56:08.153264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.786 [2024-07-15 13:56:08.227544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.356 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.356 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:42.356 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:42.356 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:42.356 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.356 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.671 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.671 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:42.671 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.671 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.671 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.671 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:42.671 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:42.671 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.671 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.671 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.671 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.671 13:56:08 
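Two SPDK applications are now in play: the nvmf target inside the namespace, reachable on the default RPC socket (/var/tmp/spdk.sock), and a second nvmf_tgt acting purely as the discovery host on /tmp/host.sock. Roughly, with paths shortened and the rpc_cmd wrapper spelled out as rpc.py:

  # target side (default socket): TCP transport, discovery listener, two null bdevs to export later
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  # host side: separate app, then attach its discovery poller to the target's discovery subsystem
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test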
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.671 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.671 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.671 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:42.672 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:42.672 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.672 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.672 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.672 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.672 13:56:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.672 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.672 13:56:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.672 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.934 [2024-07-15 13:56:09.240953] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.934 
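The @59/@55 blocks that keep repeating are two small helpers the test polls from the host socket, reconstructed here from the xtrace: each flattens an RPC listing into a sorted, space-joined string so it can be compared against expected values such as "nvme0" or "nvme0n1 nvme0n2".

  get_subsystem_names() {
      scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

At this point in the run both still print an empty string: in this trace nothing becomes visible to the host until nqn.2021-12.io.spdk:test is whitelisted with nvmf_subsystem_add_host a few steps later.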
13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.934 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.195 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:25:43.195 13:56:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:43.455 [2024-07-15 13:56:09.941353] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:43.455 [2024-07-15 13:56:09.941375] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:43.455 [2024-07-15 13:56:09.941388] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:43.715 [2024-07-15 13:56:10.029691] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:43.975 [2024-07-15 13:56:10.256085] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:43.975 [2024-07-15 13:56:10.256114] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:43.975 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:43.975 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:43.975 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:43.975 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.975 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.975 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.975 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:43.975 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.975 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.975 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.237 13:56:10 
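The target-side sequence that finally gives the host something to attach is the usual subsystem build-up, collected here from the trace; once the host NQN is allowed, the discovery poller logs "new subsystem nvme0" / "attach nvme0 done" and the helpers above start returning values:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  get_subsystem_names    # -> nvme0
  get_bdev_list          # -> nvme0n1

The next wait loop (@107) then confirms that the only path so far is the 4420 listener, using the trsvcid query shown just below.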
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:44.237 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.238 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.500 [2024-07-15 13:56:10.784963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:44.500 [2024-07-15 13:56:10.785418] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:44.500 [2024-07-15 13:56:10.785446] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.500 13:56:10 
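In between, the script also counts how many bdev notifications the host application has emitted since the last check. Reconstructed from the @74/@75 lines (the exact bookkeeping in host/discovery.sh may differ slightly):

  get_notification_count() {
      # events newer than the last seen id; advance the id by however many were found
      notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

In this run the count is 0 before anything is exposed, 1 when nvme0n1 appears, 1 more for nvme0n2, and 0 again after the 4421 listener is added further down, since a new path does not create a new bdev.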
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:44.500 [2024-07-15 13:56:10.874133] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.500 13:56:10 
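Adding the 4421 listener does not create another bdev; it adds a second path to the existing nvme0 controller, and the wait loop keeps re-reading the path list until both ports show up (the first read just below still returns only 4420, hence the retry after a short sleep):

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs    # expected to settle on: 4420 4421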
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:44.500 13:56:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:44.761 [2024-07-15 13:56:11.182741] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:44.761 [2024-07-15 13:56:11.182761] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:44.761 [2024-07-15 13:56:11.182766] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.707 13:56:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.707 [2024-07-15 13:56:12.049069] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:45.707 [2024-07-15 13:56:12.049089] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:45.707 [2024-07-15 13:56:12.057666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.707 [2024-07-15 13:56:12.057687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.707 [2024-07-15 13:56:12.057697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.707 [2024-07-15 13:56:12.057705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.707 [2024-07-15 13:56:12.057713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.707 [2024-07-15 13:56:12.057720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.707 [2024-07-15 13:56:12.057728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.707 [2024-07-15 13:56:12.057735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.707 [2024-07-15 13:56:12.057742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4d9b0 is same with the state(5) to be set 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.707 [2024-07-15 13:56:12.067680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4d9b0 (9): Bad file descriptor 00:25:45.707 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.707 [2024-07-15 13:56:12.077718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.707 [2024-07-15 13:56:12.078028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.707 [2024-07-15 13:56:12.078044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4d9b0 with addr=10.0.0.2, port=4420 00:25:45.707 [2024-07-15 13:56:12.078052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4d9b0 is same with the state(5) to be set 00:25:45.707 [2024-07-15 13:56:12.078064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4d9b0 (9): Bad file descriptor 00:25:45.707 [2024-07-15 13:56:12.078079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.707 [2024-07-15 13:56:12.078086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.707 [2024-07-15 13:56:12.078094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.707 [2024-07-15 13:56:12.078105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.707 [2024-07-15 13:56:12.087779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.707 [2024-07-15 13:56:12.088335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.707 [2024-07-15 13:56:12.088373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4d9b0 with addr=10.0.0.2, port=4420 00:25:45.707 [2024-07-15 13:56:12.088386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4d9b0 is same with the state(5) to be set 00:25:45.707 [2024-07-15 13:56:12.088405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4d9b0 (9): Bad file descriptor 00:25:45.707 [2024-07-15 13:56:12.088431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.707 [2024-07-15 13:56:12.088439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.707 [2024-07-15 13:56:12.088447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.707 [2024-07-15 13:56:12.088462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.707 [2024-07-15 13:56:12.097835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.707 [2024-07-15 13:56:12.098366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.707 [2024-07-15 13:56:12.098403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4d9b0 with addr=10.0.0.2, port=4420 00:25:45.707 [2024-07-15 13:56:12.098415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4d9b0 is same with the state(5) to be set 00:25:45.707 [2024-07-15 13:56:12.098433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4d9b0 (9): Bad file descriptor 00:25:45.708 [2024-07-15 13:56:12.098445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.708 [2024-07-15 13:56:12.098451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.708 [2024-07-15 13:56:12.098459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.708 [2024-07-15 13:56:12.098491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.708 [2024-07-15 13:56:12.107895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.708 [2024-07-15 13:56:12.108169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.708 [2024-07-15 13:56:12.108194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4d9b0 with addr=10.0.0.2, port=4420 00:25:45.708 [2024-07-15 13:56:12.108202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4d9b0 is same with the state(5) to be set 00:25:45.708 [2024-07-15 13:56:12.108216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4d9b0 (9): Bad file descriptor 00:25:45.708 [2024-07-15 13:56:12.108228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.708 [2024-07-15 13:56:12.108235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.708 [2024-07-15 13:56:12.108242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.708 [2024-07-15 13:56:12.108259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:45.708 [2024-07-15 13:56:12.117951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.708 [2024-07-15 13:56:12.118454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.708 [2024-07-15 13:56:12.118492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4d9b0 with addr=10.0.0.2, port=4420 00:25:45.708 [2024-07-15 13:56:12.118503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4d9b0 is same with the state(5) to be set 00:25:45.708 [2024-07-15 13:56:12.118521] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4d9b0 (9): Bad file descriptor 00:25:45.708 [2024-07-15 13:56:12.118550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.708 [2024-07-15 13:56:12.118558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.708 [2024-07-15 13:56:12.118566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.708 [2024-07-15 13:56:12.118581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.708 [2024-07-15 13:56:12.128006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.708 [2024-07-15 13:56:12.128445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.708 [2024-07-15 13:56:12.128460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4d9b0 with addr=10.0.0.2, port=4420 00:25:45.708 [2024-07-15 13:56:12.128468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4d9b0 is same with the state(5) to be set 00:25:45.708 [2024-07-15 13:56:12.128479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4d9b0 (9): Bad file descriptor 00:25:45.708 [2024-07-15 13:56:12.128490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.708 [2024-07-15 13:56:12.128496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.708 [2024-07-15 13:56:12.128503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.708 [2024-07-15 13:56:12.128513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.708 [2024-07-15 13:56:12.138063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.708 [2024-07-15 13:56:12.138491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.708 [2024-07-15 13:56:12.138506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4d9b0 with addr=10.0.0.2, port=4420 00:25:45.708 [2024-07-15 13:56:12.138513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4d9b0 is same with the state(5) to be set 00:25:45.708 [2024-07-15 13:56:12.138524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4d9b0 (9): Bad file descriptor 00:25:45.708 [2024-07-15 13:56:12.138534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.708 [2024-07-15 13:56:12.138541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.708 [2024-07-15 13:56:12.138548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.708 [2024-07-15 13:56:12.138558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.708 [2024-07-15 13:56:12.148125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.708 [2024-07-15 13:56:12.148430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.708 [2024-07-15 13:56:12.148442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4d9b0 with addr=10.0.0.2, port=4420 00:25:45.708 [2024-07-15 13:56:12.148450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4d9b0 is same with the state(5) to be set 00:25:45.708 [2024-07-15 13:56:12.148460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4d9b0 (9): Bad file descriptor 00:25:45.708 [2024-07-15 13:56:12.148470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.708 [2024-07-15 13:56:12.148477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.708 [2024-07-15 13:56:12.148483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.708 [2024-07-15 13:56:12.148494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.708 [2024-07-15 13:56:12.158175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.708 [2024-07-15 13:56:12.158545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.708 [2024-07-15 13:56:12.158557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4d9b0 with addr=10.0.0.2, port=4420 00:25:45.708 [2024-07-15 13:56:12.158564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4d9b0 is same with the state(5) to be set 00:25:45.708 [2024-07-15 13:56:12.158575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4d9b0 (9): Bad file descriptor 00:25:45.708 [2024-07-15 13:56:12.158585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.708 [2024-07-15 13:56:12.158591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.708 [2024-07-15 13:56:12.158598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.708 [2024-07-15 13:56:12.158608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:45.708 [2024-07-15 13:56:12.168226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:45.708 [2024-07-15 13:56:12.168620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.708 [2024-07-15 13:56:12.168633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4d9b0 with addr=10.0.0.2, port=4420 00:25:45.708 [2024-07-15 13:56:12.168640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4d9b0 is same with the state(5) to be set 00:25:45.708 [2024-07-15 13:56:12.168651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4d9b0 (9): Bad file descriptor 00:25:45.708 [2024-07-15 13:56:12.168661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.708 [2024-07-15 13:56:12.168667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.708 [2024-07-15 13:56:12.168674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.708 [2024-07-15 13:56:12.168685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.708 13:56:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:45.708 [2024-07-15 13:56:12.178277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.708 [2024-07-15 13:56:12.178674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.708 [2024-07-15 13:56:12.178687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4d9b0 with addr=10.0.0.2, port=4420 00:25:45.708 [2024-07-15 13:56:12.178694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4d9b0 is same with the state(5) to be set 00:25:45.708 [2024-07-15 13:56:12.178705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4d9b0 (9): Bad file descriptor 00:25:45.708 [2024-07-15 13:56:12.178715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.708 [2024-07-15 13:56:12.178721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.708 [2024-07-15 13:56:12.178728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.709 [2024-07-15 13:56:12.178738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.709 [2024-07-15 13:56:12.179073] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:45.709 [2024-07-15 13:56:12.179089] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:45.709 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.709 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:25:45.709 13:56:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.096 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:47.097 13:56:13 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.097 13:56:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.039 [2024-07-15 13:56:14.502200] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:48.039 [2024-07-15 13:56:14.502218] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:48.039 [2024-07-15 13:56:14.502231] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:48.301 [2024-07-15 13:56:14.630637] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:48.301 [2024-07-15 13:56:14.736688] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:48.301 [2024-07-15 13:56:14.736717] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:48.301 request: 00:25:48.301 { 00:25:48.301 "name": "nvme", 00:25:48.301 "trtype": "tcp", 00:25:48.301 "traddr": "10.0.0.2", 00:25:48.301 "adrfam": "ipv4", 00:25:48.301 "trsvcid": "8009", 00:25:48.301 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:48.301 "wait_for_attach": true, 00:25:48.301 "method": "bdev_nvme_start_discovery", 00:25:48.301 "req_id": 1 00:25:48.301 } 00:25:48.301 Got JSON-RPC error response 00:25:48.301 response: 00:25:48.301 { 00:25:48.301 "code": -17, 00:25:48.301 "message": "File exists" 00:25:48.301 } 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.301 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.562 request: 00:25:48.562 { 00:25:48.562 "name": "nvme_second", 00:25:48.562 "trtype": "tcp", 00:25:48.562 "traddr": "10.0.0.2", 00:25:48.562 "adrfam": "ipv4", 00:25:48.562 "trsvcid": "8009", 00:25:48.562 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:48.562 "wait_for_attach": true, 00:25:48.562 "method": "bdev_nvme_start_discovery", 00:25:48.562 "req_id": 1 00:25:48.562 } 00:25:48.562 Got JSON-RPC error response 00:25:48.562 response: 00:25:48.562 { 00:25:48.562 "code": -17, 00:25:48.562 "message": "File exists" 00:25:48.562 } 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.562 13:56:14 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:48.562 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:48.563 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:48.563 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:48.563 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:48.563 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:48.563 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:48.563 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:48.563 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.563 13:56:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.505 [2024-07-15 13:56:15.997448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.505 [2024-07-15 13:56:15.997477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4d480 with addr=10.0.0.2, port=8010 00:25:49.505 [2024-07-15 13:56:15.997491] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:49.505 [2024-07-15 13:56:15.997499] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:49.505 [2024-07-15 13:56:15.997506] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:50.891 [2024-07-15 13:56:16.999763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.891 [2024-07-15 13:56:16.999786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4d480 with addr=10.0.0.2, port=8010 00:25:50.891 [2024-07-15 13:56:16.999797] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:50.891 [2024-07-15 13:56:16.999804] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:50.891 [2024-07-15 13:56:16.999811] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:51.833 [2024-07-15 13:56:18.001708] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:51.833 request: 00:25:51.833 { 00:25:51.833 "name": "nvme_second", 00:25:51.833 "trtype": "tcp", 00:25:51.833 "traddr": "10.0.0.2", 00:25:51.833 "adrfam": "ipv4", 00:25:51.833 "trsvcid": "8010", 00:25:51.834 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:51.834 "wait_for_attach": false, 00:25:51.834 "attach_timeout_ms": 3000, 00:25:51.834 "method": "bdev_nvme_start_discovery", 00:25:51.834 "req_id": 1 00:25:51.834 } 00:25:51.834 Got JSON-RPC error response 00:25:51.834 response: 00:25:51.834 { 00:25:51.834 "code": -110, 
00:25:51.834 "message": "Connection timed out" 00:25:51.834 } 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1211922 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:51.834 rmmod nvme_tcp 00:25:51.834 rmmod nvme_fabrics 00:25:51.834 rmmod nvme_keyring 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1211809 ']' 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1211809 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1211809 ']' 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1211809 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1211809 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1211809' 00:25:51.834 killing process with pid 1211809 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1211809 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1211809 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.834 13:56:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.405 13:56:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:54.405 00:25:54.405 real 0m20.690s 00:25:54.405 user 0m25.007s 00:25:54.405 sys 0m6.877s 00:25:54.405 13:56:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:54.405 13:56:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.405 ************************************ 00:25:54.405 END TEST nvmf_host_discovery 00:25:54.405 ************************************ 00:25:54.406 13:56:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:54.406 13:56:20 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:54.406 13:56:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:54.406 13:56:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:54.406 13:56:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:54.406 ************************************ 00:25:54.406 START TEST nvmf_host_multipath_status 00:25:54.406 ************************************ 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:54.406 * Looking for test storage... 
00:25:54.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:54.406 13:56:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:54.406 13:56:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:01.061 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:01.061 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
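The loop traced here walks the whitelisted Intel e810 device IDs (0x1592/0x159b), keeps the functions that are actually present, and resolves each one to its kernel net device through sysfs, which is how cvl_0_0 and cvl_0_1 are found. A condensed sketch of the same lookup, assuming lspci is available and using the 0x159b ID reported in this run:

# Sketch of the e810 discovery traced above: list Intel 0x159b functions,
# then read the net devices bound to each one from sysfs.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $dev ]] || continue
        echo "Found net device under $pci: ${dev##*/}"
    done
done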
00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:01.061 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:01.061 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:01.061 13:56:27 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:01.061 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:01.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:01.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:26:01.323 00:26:01.323 --- 10.0.0.2 ping statistics --- 00:26:01.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.323 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:01.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:01.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:26:01.323 00:26:01.323 --- 10.0.0.1 ping statistics --- 00:26:01.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.323 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:01.323 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:01.584 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:01.584 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:01.584 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:01.584 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:01.584 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1218209 00:26:01.584 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1218209 00:26:01.584 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:01.584 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1218209 ']' 00:26:01.584 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.584 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:01.584 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.584 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:01.584 13:56:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:01.584 [2024-07-15 13:56:27.914000] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
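nvmf_tcp_init above splits the two cvl ports across a namespace boundary: cvl_0_0 becomes the target interface inside cvl_0_0_ns_spdk at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, reachability is checked in both directions, and nvmf_tgt is then launched inside the namespace. A condensed sketch of that bring-up with the names, addresses, and flags visible in this run (target binary path abbreviated, run as root):

# Hedged sketch of the namespace bring-up traced above.
ip netns add cvl_0_0_ns_spdk                    # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                              # target reachable from the root namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp                               # host-side transport used later by the initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &   # target app on 2 cores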
00:26:01.584 [2024-07-15 13:56:27.914066] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.584 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.584 [2024-07-15 13:56:27.984817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:01.584 [2024-07-15 13:56:28.059461] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.584 [2024-07-15 13:56:28.059500] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.584 [2024-07-15 13:56:28.059508] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.584 [2024-07-15 13:56:28.059514] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.584 [2024-07-15 13:56:28.059520] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.584 [2024-07-15 13:56:28.059658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.584 [2024-07-15 13:56:28.059659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.155 13:56:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:02.155 13:56:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:02.155 13:56:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:02.155 13:56:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:02.155 13:56:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:02.415 13:56:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.415 13:56:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1218209 00:26:02.415 13:56:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:02.415 [2024-07-15 13:56:28.860009] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:02.415 13:56:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:02.676 Malloc0 00:26:02.676 13:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:02.938 13:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:02.938 13:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.200 [2024-07-15 13:56:29.492986] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.200 13:56:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:03.200 [2024-07-15 13:56:29.645332] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:03.200 13:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:03.200 13:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1218596 00:26:03.200 13:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:03.200 13:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1218596 /var/tmp/bdevperf.sock 00:26:03.200 13:56:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1218596 ']' 00:26:03.200 13:56:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:03.200 13:56:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:03.200 13:56:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:03.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:03.200 13:56:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:03.200 13:56:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:03.461 13:56:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:03.461 13:56:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:03.461 13:56:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:03.721 13:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:03.981 Nvme0n1 00:26:03.981 13:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:04.242 Nvme0n1 00:26:04.242 13:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:04.242 13:56:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:06.786 13:56:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:06.786 13:56:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:06.786 13:56:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:06.786 13:56:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:07.728 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:07.728 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:07.728 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.728 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:07.989 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.989 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:07.989 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.989 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.989 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.989 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.989 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.989 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.250 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.250 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:08.250 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.250 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.250 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.250 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:08.511 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.511 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:08.511 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.511 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:08.511 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.511 13:56:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.775 13:56:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.775 13:56:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:08.776 13:56:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:08.776 13:56:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:09.036 13:56:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:09.977 13:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:09.977 13:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:09.977 13:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.977 13:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.238 13:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.238 13:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:10.238 13:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.238 13:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.501 13:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.501 13:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.501 13:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.501 13:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:10.501 13:56:36 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.501 13:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:10.501 13:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.501 13:56:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:10.761 13:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.761 13:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:10.761 13:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.761 13:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.022 13:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.022 13:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:11.022 13:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.022 13:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:11.022 13:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.022 13:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:11.022 13:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:11.282 13:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:11.543 13:56:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:12.485 13:56:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:12.485 13:56:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:12.485 13:56:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.485 13:56:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:12.485 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.485 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:12.745 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.745 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:12.745 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:12.745 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:12.745 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.745 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.005 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.005 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.005 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.005 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.005 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.005 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:13.005 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.005 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:13.266 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.266 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:13.266 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.266 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:13.526 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.526 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:13.526 13:56:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:13.526 13:56:40 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:13.786 13:56:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:14.725 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:14.725 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:14.725 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.725 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:14.984 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.984 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:14.984 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.984 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.243 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.243 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.243 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.243 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.243 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.243 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:15.243 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.243 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:15.503 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.503 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:15.503 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.503 13:56:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:15.762 13:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:26:15.762 13:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:15.762 13:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.762 13:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:15.762 13:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.762 13:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:15.762 13:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:16.021 13:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:16.294 13:56:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:17.254 13:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:17.254 13:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:17.254 13:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.254 13:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.254 13:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.254 13:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:17.254 13:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.254 13:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:17.513 13:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.513 13:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:17.513 13:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.513 13:56:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:17.773 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.773 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:26:17.773 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.773 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:17.773 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.773 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:17.773 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.773 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:18.033 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.033 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:18.033 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.033 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:18.033 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.033 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:18.033 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:18.320 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:18.579 13:56:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:19.519 13:56:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:19.519 13:56:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:19.519 13:56:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.519 13:56:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:19.780 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:19.780 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:19.780 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.780 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:19.780 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.780 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:19.780 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.780 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:20.040 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.040 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:20.040 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.040 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:20.040 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.040 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:20.040 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.040 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:20.300 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:20.300 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:20.300 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.300 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:20.559 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.559 13:56:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:20.559 13:56:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:20.559 13:56:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:20.820 13:56:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:21.079 13:56:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:22.020 13:56:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:22.020 13:56:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:22.020 13:56:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.020 13:56:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.280 13:56:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.280 13:56:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:22.280 13:56:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.280 13:56:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:22.280 13:56:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.280 13:56:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:22.280 13:56:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.280 13:56:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:22.541 13:56:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.541 13:56:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:22.541 13:56:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.541 13:56:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:22.541 13:56:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.541 13:56:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:22.541 13:56:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.541 13:56:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:22.801 13:56:49 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.801 13:56:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:22.801 13:56:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.801 13:56:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:23.061 13:56:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.061 13:56:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:23.061 13:56:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:23.062 13:56:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:23.322 13:56:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:24.263 13:56:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:24.263 13:56:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:24.263 13:56:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.263 13:56:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:24.523 13:56:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.523 13:56:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:24.523 13:56:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.523 13:56:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:24.784 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.784 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:24.784 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.784 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:24.784 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.784 13:56:51 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:24.784 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.784 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.044 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.044 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:25.044 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.044 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:25.044 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.044 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:25.044 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.044 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:25.304 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.305 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:25.305 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:25.565 13:56:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:25.565 13:56:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:26.975 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:26.975 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:26.975 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.975 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:26.975 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.975 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:26.975 13:56:53 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.975 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:26.975 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.975 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:26.975 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.975 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:27.235 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.235 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:27.235 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.235 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:27.235 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.235 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:27.235 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:27.235 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.495 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.495 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:27.495 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.495 13:56:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:27.756 13:56:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.756 13:56:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:27.756 13:56:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:27.756 13:56:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:28.015 13:56:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:28.954 13:56:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:28.954 13:56:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:28.954 13:56:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.954 13:56:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:29.215 13:56:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.215 13:56:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:29.215 13:56:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.215 13:56:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:29.215 13:56:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.215 13:56:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:29.215 13:56:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:29.215 13:56:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.475 13:56:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.475 13:56:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:29.475 13:56:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.475 13:56:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:29.735 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.735 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:29.735 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.735 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:29.735 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.735 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:29.735 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.735 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:30.018 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.018 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1218596 00:26:30.018 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1218596 ']' 00:26:30.018 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1218596 00:26:30.018 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:30.018 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:30.018 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1218596 00:26:30.018 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:30.018 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:30.018 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1218596' 00:26:30.018 killing process with pid 1218596 00:26:30.018 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1218596 00:26:30.018 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1218596 00:26:30.018 Connection closed with partial response: 00:26:30.018 00:26:30.018 00:26:30.302 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1218596 00:26:30.302 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:30.302 [2024-07-15 13:56:29.689854] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:30.302 [2024-07-15 13:56:29.689908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218596 ] 00:26:30.302 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.302 [2024-07-15 13:56:29.741555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.302 [2024-07-15 13:56:29.794509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.302 Running I/O for 90 seconds... 
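The helpers traced above boil down to three small shell functions. A minimal sketch, reconstructed from the xtrace alone (variable names such as rpc_py and bdevperf_rpc_sock, the quoting, and the exact option order are assumptions; the shipped test/nvmf/host/multipath_status.sh may differ):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

# Fetch bdevperf's view of its I/O paths and pull one attribute
# (current/connected/accessible) for the path listening on $port,
# then compare it with the expected value.
port_status() {
    local port=$1 attr=$2 want=$3
    [[ $("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr") == "$want" ]]
}

# Advertise a new ANA state on each of the two listeners of cnode1.
set_ANA_state() {
    "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# Arguments, in order: the expected current/connected/accessible values for
# port 4420 interleaved with the same three flags for port 4421.
check_status() {
    port_status 4420 current "$1" && port_status 4421 current "$2" \
        && port_status 4420 connected "$3" && port_status 4421 connected "$4" \
        && port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}

When debugging by hand, the same RPC can be piped through jq once to see all three flags per path, e.g. $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | jq -r '.poll_groups[].io_paths[] | {trsvcid: .transport.trsvcid, current, connected, accessible}'.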
00:26:30.302 [2024-07-15 13:56:42.377271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.302 [2024-07-15 13:56:42.377302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:30.302 [2024-07-15 13:56:42.377320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.302 [2024-07-15 13:56:42.377325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.302 [2024-07-15 13:56:42.377336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.302 [2024-07-15 13:56:42.377341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:30.302 [2024-07-15 13:56:42.377351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.302 [2024-07-15 13:56:42.377356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:30.302 [2024-07-15 13:56:42.377366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.302 [2024-07-15 13:56:42.377371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.302 [2024-07-15 13:56:42.377381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.302 [2024-07-15 13:56:42.377386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.302 [2024-07-15 13:56:42.377396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.302 [2024-07-15 13:56:42.377401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.302 [2024-07-15 13:56:42.377412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.302 [2024-07-15 13:56:42.377417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:30.302 [2024-07-15 13:56:42.377601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.302 [2024-07-15 13:56:42.377609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:30.302 [2024-07-15 13:56:42.377620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.302 [2024-07-15 13:56:42.377625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:30.302 [2024-07-15 13:56:42.377635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.302 [2024-07-15 13:56:42.377645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:30.302 [2024-07-15 13:56:42.377656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.302 [2024-07-15 13:56:42.377661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:30.302 [2024-07-15 13:56:42.377672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.302 [2024-07-15 13:56:42.377677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:30.302 [2024-07-15 13:56:42.377687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.302 [2024-07-15 13:56:42.377692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:30.302 [2024-07-15 13:56:42.377702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.302 [2024-07-15 13:56:42.377708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.377718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.303 [2024-07-15 13:56:42.377724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.377734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.377740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.377750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.377755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.377766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.377771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.377781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.377786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.377796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.377801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.377811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.377815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.377825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.377831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.377842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.377847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.377857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.377862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.377997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.303 [2024-07-15 13:56:42.378065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.378543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.303 [2024-07-15 13:56:42.378558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.303 [2024-07-15 13:56:42.378573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.303 [2024-07-15 13:56:42.378588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.303 [2024-07-15 13:56:42.378603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.303 [2024-07-15 13:56:42.378618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.303 [2024-07-15 13:56:42.378634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.378644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.303 [2024-07-15 13:56:42.378649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.303 [2024-07-15 13:56:42.379931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.303 [2024-07-15 13:56:42.379938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.379948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.379953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.379964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.379969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
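Every completion printed in this stretch of the dump carries the same status, ASYMMETRIC ACCESS INACCESSIBLE (03/02): status code type 0x3 (path-related) with status code 0x02, meaning the command was issued on a path whose ANA group was in the inaccessible state at that point in the test, and dnr:0 shows the Do Not Retry bit is clear, so the initiator may retry the I/O on the other path. A quick way to tally these completions across the whole dumped log (a triage one-liner, assuming the try.txt layout shown here):

grep -oE 'ASYMMETRIC ACCESS [A-Z]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt | sort | uniq -c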
00:26:30.304 [2024-07-15 13:56:42.379979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.379985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.379996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.304 [2024-07-15 13:56:42.380563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.304 [2024-07-15 13:56:42.380669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.304 [2024-07-15 13:56:42.380686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.304 [2024-07-15 13:56:42.380702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:30.304 [2024-07-15 13:56:42.380712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 
nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.380717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.380727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.380732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.380742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.380748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.380758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.380763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.380773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.380778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.380788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.380793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.381157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.381172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.381187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.381202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.381218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.381233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
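In this dump the WRITE prints carry SGL DATA BLOCK OFFSET descriptors while the READ prints use SGL TRANSPORT DATA BLOCK descriptors, and both directions are being completed with the same path-related status. A rough breakdown by opcode can be pulled the same way (again assuming the try.txt layout shown above):

grep -oE '(READ|WRITE) sqid:[0-9]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt | awk '{print $1}' | sort | uniq -c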
00:26:30.305 [2024-07-15 13:56:42.381243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.381248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.381263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.381280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.381295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.381310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.381326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.381341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.381356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.381371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.305 [2024-07-15 13:56:42.381386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.305 [2024-07-15 13:56:42.381740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:30.305 [2024-07-15 13:56:42.381750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.381755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.381765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.381770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.381781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.381786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.381796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.306 [2024-07-15 13:56:42.381801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.381811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.381816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.381957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.381964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.381974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.381979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.381989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.306 [2024-07-15 13:56:42.381995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.306 [2024-07-15 13:56:42.382946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:30.306 [2024-07-15 13:56:42.382956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.382961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.382971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.382976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.382986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.382991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:26:30.307 [2024-07-15 13:56:42.383001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-07-15 13:56:42.383006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.383016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-07-15 13:56:42.383020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.383031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-07-15 13:56:42.383036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.383046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-07-15 13:56:42.383050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.383060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-07-15 13:56:42.383065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.383077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-07-15 13:56:42.383082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.383093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-07-15 13:56:42.383097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.383218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.383225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.383236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.383240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.383250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.383259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.383269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.383273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.383283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.383288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.307 [2024-07-15 13:56:42.393974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.393988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.393998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.394003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.394013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.394018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.394028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.394033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.394043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.394048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.394058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.394063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.394073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.394077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:30.307 [2024-07-15 13:56:42.394087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.307 [2024-07-15 13:56:42.394092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 
nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:26:30.308 [2024-07-15 13:56:42.394427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.308 [2024-07-15 13:56:42.394719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:30.308 [2024-07-15 13:56:42.394729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.308 [2024-07-15 13:56:42.394734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.394749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.394764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.394779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.394794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.394809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.394824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.394839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.394854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.309 [2024-07-15 13:56:42.394869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.394884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.394900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.394915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.394930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.394945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.309 [2024-07-15 13:56:42.394959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.394974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.394989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.394999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.395014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.395029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.395043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.395058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.395074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.395090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.395107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.395125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.395141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.395156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.395171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.395186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.395202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.395217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.395232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:30.309 [2024-07-15 13:56:42.395247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.309 [2024-07-15 13:56:42.395252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.395268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.395284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.395300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.395315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:26:30.310 [2024-07-15 13:56:42.395325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.395330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.395346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.395362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.395377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.395393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.395408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.395423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.395438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.395454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.395470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.310 [2024-07-15 13:56:42.395486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.310 [2024-07-15 13:56:42.395501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.310 [2024-07-15 13:56:42.395516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.310 [2024-07-15 13:56:42.395531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.310 [2024-07-15 13:56:42.395547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.395557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.310 [2024-07-15 13:56:42.395562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.396403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.310 [2024-07-15 13:56:42.396415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.396428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.396433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.396444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.396449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.396459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.396464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.396474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.396479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.396489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.396497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.396507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.396512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.396523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.396528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.397402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.397410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.397420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.397426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.397436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.397441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.397451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.397456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.397467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.397472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.397482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.310 [2024-07-15 13:56:42.397487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.397497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.397503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.397513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.397518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.397528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.397533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.397543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.397548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.397562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.397567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.397577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.397582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.397592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.310 [2024-07-15 13:56:42.397597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.310 [2024-07-15 13:56:42.397607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.397622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.397637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84512 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.397652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.397807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.397825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.397840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.397855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.397870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.397887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.397902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.397916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.397932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.397946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.397962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.397976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.397991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.397996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.398006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.398011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.398021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.398026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.398036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.398041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.398051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.311 [2024-07-15 13:56:42.404833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.404868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.311 [2024-07-15 13:56:42.404878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:26:30.311 [2024-07-15 13:56:42.404889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.311 [2024-07-15 13:56:42.404894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.404904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.311 [2024-07-15 13:56:42.404909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.404919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.311 [2024-07-15 13:56:42.404924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.404935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.311 [2024-07-15 13:56:42.404939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.404950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.311 [2024-07-15 13:56:42.404955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.404965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.311 [2024-07-15 13:56:42.404970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.404980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.404985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.311 [2024-07-15 13:56:42.404995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.311 [2024-07-15 13:56:42.405000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.312 [2024-07-15 13:56:42.405393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.312 [2024-07-15 13:56:42.405408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.312 [2024-07-15 13:56:42.405423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.312 [2024-07-15 13:56:42.405438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.312 [2024-07-15 13:56:42.405453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.312 [2024-07-15 13:56:42.405468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.312 [2024-07-15 13:56:42.405483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.312 [2024-07-15 13:56:42.405498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.312 [2024-07-15 13:56:42.405514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.312 [2024-07-15 13:56:42.405528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.312 [2024-07-15 13:56:42.405546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.312 [2024-07-15 13:56:42.405561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.312 [2024-07-15 13:56:42.405575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.312 [2024-07-15 13:56:42.405591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.312 [2024-07-15 13:56:42.405605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:30.312 [2024-07-15 13:56:42.405621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 
nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.312 [2024-07-15 13:56:42.405864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:30.312 [2024-07-15 13:56:42.405875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.312 [2024-07-15 13:56:42.405880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.405890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.405895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.405905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.405911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.405921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.405926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.405936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.405941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.405951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.405956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.405966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.405971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.405981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.405986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.405996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:26:30.313 [2024-07-15 13:56:42.406071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.313 [2024-07-15 13:56:42.406368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.313 [2024-07-15 13:56:42.406383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.313 [2024-07-15 13:56:42.406398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.313 [2024-07-15 13:56:42.406413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.313 [2024-07-15 13:56:42.406423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.313 [2024-07-15 13:56:42.406428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.314 [2024-07-15 13:56:42.406443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.314 [2024-07-15 13:56:42.406458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.314 [2024-07-15 13:56:42.406475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.314 [2024-07-15 13:56:42.406519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.406813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.406818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.407504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.407518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.407530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.407535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.407545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.407550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.407560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.407565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.407576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.407580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.407590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.407595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.407606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.407610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.407620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.407625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.407635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.407641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:30.314 
[2024-07-15 13:56:42.407652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.407657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.407667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.407672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.407682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.407687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.314 [2024-07-15 13:56:42.407696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.314 [2024-07-15 13:56:42.407702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.315 [2024-07-15 13:56:42.407718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.315 [2024-07-15 13:56:42.407733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.315 [2024-07-15 13:56:42.407748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.315 [2024-07-15 13:56:42.407763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.315 [2024-07-15 13:56:42.407778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.315 [2024-07-15 13:56:42.407794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.315 [2024-07-15 13:56:42.407810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.315 [2024-07-15 13:56:42.407828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.315 [2024-07-15 13:56:42.407845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.315 [2024-07-15 13:56:42.407863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.315 [2024-07-15 13:56:42.407880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.315 [2024-07-15 13:56:42.407898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.315 [2024-07-15 13:56:42.407918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.315 [2024-07-15 13:56:42.407934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.315 [2024-07-15 13:56:42.407949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.315 [2024-07-15 13:56:42.407965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.407975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.315 [2024-07-15 13:56:42.407980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.408205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.315 [2024-07-15 13:56:42.408212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.408224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.315 [2024-07-15 13:56:42.408229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.408239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.315 [2024-07-15 13:56:42.408244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.408254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.315 [2024-07-15 13:56:42.408259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.408269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.315 [2024-07-15 13:56:42.408274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.408284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.315 [2024-07-15 13:56:42.408290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.408300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.315 [2024-07-15 13:56:42.408305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.408315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.315 [2024-07-15 13:56:42.408322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.408333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:30.315 [2024-07-15 13:56:42.408337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.408348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.315 [2024-07-15 13:56:42.408353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.408363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.315 [2024-07-15 13:56:42.408368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.408378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.315 [2024-07-15 13:56:42.408383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:30.315 [2024-07-15 13:56:42.408393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.316 [2024-07-15 13:56:42.408398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.316 [2024-07-15 13:56:42.408413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.316 [2024-07-15 13:56:42.408428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.316 [2024-07-15 13:56:42.408444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.316 [2024-07-15 13:56:42.408459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.316 [2024-07-15 13:56:42.408474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 
lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.316 [2024-07-15 13:56:42.408490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.316 [2024-07-15 13:56:42.408898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:26:30.316 [2024-07-15 13:56:42.408956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.408971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.408977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.409115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.409126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.409137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.409142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.409152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.409157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.409167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.409172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.409184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.409189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.409199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.409204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.409214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.409219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.414118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.414143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.414256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.316 [2024-07-15 13:56:42.414265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:30.316 [2024-07-15 13:56:42.414276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.317 [2024-07-15 13:56:42.414556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.317 [2024-07-15 13:56:42.414572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.317 [2024-07-15 13:56:42.414587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.317 [2024-07-15 13:56:42.414602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.317 [2024-07-15 13:56:42.414617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.317 [2024-07-15 13:56:42.414633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.317 [2024-07-15 13:56:42.414648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.317 [2024-07-15 13:56:42.414663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.317 [2024-07-15 13:56:42.414876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.317 [2024-07-15 13:56:42.414886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.414891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.414901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.414905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.414916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.414920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.414930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.414935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.414947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.414952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.414962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.414967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.414977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.414982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.414992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.414997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:26:30.318 [2024-07-15 13:56:42.415007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.318 [2024-07-15 13:56:42.415286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.318 [2024-07-15 13:56:42.415302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.318 [2024-07-15 13:56:42.415318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.318 [2024-07-15 13:56:42.415333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.318 [2024-07-15 13:56:42.415349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.318 [2024-07-15 13:56:42.415364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.318 [2024-07-15 13:56:42.415379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.318 [2024-07-15 13:56:42.415394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.318 [2024-07-15 13:56:42.415453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.318 [2024-07-15 13:56:42.415514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:30.318 [2024-07-15 13:56:42.415525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.415529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.415544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.415560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.415575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.415590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 
lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.415605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.415620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.415635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.415650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.415665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.415681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.415697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.415712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.415727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.415742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.415757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.415773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.415789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.415804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.415821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.415836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.415851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.415867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.415883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.415898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:26:30.319 [2024-07-15 13:56:42.415909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.415913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.415929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.415943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.415959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.415973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.415989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.415999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.319 [2024-07-15 13:56:42.416003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.416013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.416018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.416029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.416033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.416043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.416048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.416059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.416065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.416916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.416929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.416941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.416946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.416956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.416961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.416972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.416977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.416987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.416993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.417003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.319 [2024-07-15 13:56:42.417008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:30.319 [2024-07-15 13:56:42.417019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.320 [2024-07-15 13:56:42.417577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.320 [2024-07-15 13:56:42.417876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.320 [2024-07-15 13:56:42.417892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.320 [2024-07-15 13:56:42.417907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.320 [2024-07-15 13:56:42.417922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.320 [2024-07-15 13:56:42.417937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.320 [2024-07-15 13:56:42.417953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.320 [2024-07-15 13:56:42.417969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.417988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.417998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.418003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.418013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.418017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.418028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.418033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.418043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.418048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.320 [2024-07-15 13:56:42.418058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.320 [2024-07-15 13:56:42.418063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:26:30.321 [2024-07-15 13:56:42.418315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418753] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.418778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.418784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.419189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.419196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.419206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.419211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.419221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.419226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.419238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.419243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.419253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.419258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.419268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.321 [2024-07-15 13:56:42.419273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.419283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.321 [2024-07-15 13:56:42.419288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.419298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.321 [2024-07-15 
13:56:42.419303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.419313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.321 [2024-07-15 13:56:42.419318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.419328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.321 [2024-07-15 13:56:42.419333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.419343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.321 [2024-07-15 13:56:42.419348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.419358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.321 [2024-07-15 13:56:42.419363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.419373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.321 [2024-07-15 13:56:42.419378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.321 [2024-07-15 13:56:42.419388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.419393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.419403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.419408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.419419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.419425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.419895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.419901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.419912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84680 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.419917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.419927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.419932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.419942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.419947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.419957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.419962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.419972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.419977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.419987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.419992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.420007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.420022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.420037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.420052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.420068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.420083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.420098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.420113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.420133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.420148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.420277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.420293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.420308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.420323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 
13:56:42.420333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.420338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.420353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.420368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.420464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.420479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.420494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.420510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.420525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.420540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.420555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 
sqhd:0004 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.420570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.420585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.420600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.420615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.420630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.420648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.322 [2024-07-15 13:56:42.420663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.322 [2024-07-15 13:56:42.420678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:30.322 [2024-07-15 13:56:42.420688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.420693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.420704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.420709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.420874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.420880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.420891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.420896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.420906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.420911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.420921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.420926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.420936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.420941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.420951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.420956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.420966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.420971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.420983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.420988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421221] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.421991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.421995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.422006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.422011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.422022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.422027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.422037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.422042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.422053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:41 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.422058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.422068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.323 [2024-07-15 13:56:42.422073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.422083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.323 [2024-07-15 13:56:42.422090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.422100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.323 [2024-07-15 13:56:42.422105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.422115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.323 [2024-07-15 13:56:42.422120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.422135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.323 [2024-07-15 13:56:42.422140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.422150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.323 [2024-07-15 13:56:42.422155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.422165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.323 [2024-07-15 13:56:42.422170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.422180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.422185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.422196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.422201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.323 [2024-07-15 13:56:42.422314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.323 [2024-07-15 13:56:42.422321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:30.324 
[2024-07-15 13:56:42.422631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.422986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.422991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.423002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.423007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.423200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.423207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.423218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.423223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.423233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.423238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.423249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.423254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.423264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.423269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.423279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.423285] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.423295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.423300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.423311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.423316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.423697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.423703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.423714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.423719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.423729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.423734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.423744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.324 [2024-07-15 13:56:42.423749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:30.324 [2024-07-15 13:56:42.423759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.423764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.423774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.423779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.423789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.423794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.423804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 
13:56:42.423809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.423819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.423824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.423834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.423839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.423851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.423856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.423866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.423871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.423882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.423886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.423897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.423902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.423912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.423917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.423927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.423932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.424415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84680 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.424431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.424446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.424461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.424476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.424494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.424514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.424530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.424545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.424560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.424575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424585] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.424590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.424606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.424621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.424636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.424651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.424667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.424796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.424814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.424829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.424844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 
13:56:42.424854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.325 [2024-07-15 13:56:42.424860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.424874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.424885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.424890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.425023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.425030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.425040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.425046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.425056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.425061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.425071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.425076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.425086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.425091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.425102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.325 [2024-07-15 13:56:42.425107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:30.325 [2024-07-15 13:56:42.425117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 
cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.326 [2024-07-15 13:56:42.425235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425681] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.425769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.425774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.426012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.426019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.426030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.426034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.426045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.426050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.426060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:30.326 [2024-07-15 13:56:42.426065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.426075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.426080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.426090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.426095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.426106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.426110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.426125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.426130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.426488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.426495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.426506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.426511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.426523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.426528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.426538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.426543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.426553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.426558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.426568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 
lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.326 [2024-07-15 13:56:42.426573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:30.326 [2024-07-15 13:56:42.426583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.327 [2024-07-15 13:56:42.426589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.426599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.327 [2024-07-15 13:56:42.426604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.426614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.327 [2024-07-15 13:56:42.426619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.426629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.327 [2024-07-15 13:56:42.426634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.426644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.327 [2024-07-15 13:56:42.426649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.426660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.327 [2024-07-15 13:56:42.426665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.426675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.327 [2024-07-15 13:56:42.426680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.426690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.426695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.426706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.426713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.426826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.426832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.426843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.426848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.426858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.426863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.426873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.426878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.426888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.426893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.426903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.426908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.426918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.426924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.426934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.426939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:30.327 
[2024-07-15 13:56:42.427144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427780] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.427806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.427811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.327 [2024-07-15 13:56:42.428197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.327 [2024-07-15 13:56:42.428204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-07-15 13:56:42.428220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-07-15 13:56:42.428235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-07-15 13:56:42.428250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-07-15 13:56:42.428267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-07-15 13:56:42.428282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.428297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 
13:56:42.428312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.428328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.428343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.428359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.428374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.428389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.428404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-07-15 13:56:42.428419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-07-15 13:56:42.428434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-07-15 13:56:42.428911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84680 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-07-15 13:56:42.428927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-07-15 13:56:42.428942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-07-15 13:56:42.428956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-07-15 13:56:42.428972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.328 [2024-07-15 13:56:42.428987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.428997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.429002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.429012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.429017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.429027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.429032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.429042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.429047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.429058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.429063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.429073] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.429078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.429088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.429094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.429104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.429109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.429119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.429127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.429137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.429142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.429153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.429158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.429281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.429287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.429298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.429302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.429313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.429318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.328 [2024-07-15 13:56:42.429328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.328 [2024-07-15 13:56:42.429333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 
13:56:42.429343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.329 [2024-07-15 13:56:42.429348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 
sqhd:0004 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.329 [2024-07-15 13:56:42.429729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.429992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.429997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.430007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.430012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.430023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.430028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.430224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.430233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.430243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.430248] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.430258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.430263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.430273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.430278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.430289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.430293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.430303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.430308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.430318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.430323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.430334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.430339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.430490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.430497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.430508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.430513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.430523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.430528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.430538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.329 [2024-07-15 13:56:42.430543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.430553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.430562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:30.329 [2024-07-15 13:56:42.430572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.329 [2024-07-15 13:56:42.430577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.430587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.430592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.430602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.430607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.430986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.430992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.330 [2024-07-15 13:56:42.431083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.330 [2024-07-15 13:56:42.431098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.330 [2024-07-15 13:56:42.431113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.330 [2024-07-15 13:56:42.431134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.330 [2024-07-15 13:56:42.431149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.330 [2024-07-15 13:56:42.431164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.330 [2024-07-15 13:56:42.431179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:30.330 
[2024-07-15 13:56:42.431646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.431987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.431992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.432002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.432007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.432017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.432022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:30.330 [2024-07-15 13:56:42.432206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.330 [2024-07-15 13:56:42.432213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.432229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.432244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.432259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.432274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.432290] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.432307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.432323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.432709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.432725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.432740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.432755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.432770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.432785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.432801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 
13:56:42.432816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.432832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.432847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.432863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.432880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.432896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.432911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.432926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.432936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.432941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.433433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84680 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.433449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.433465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.433480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.433495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.331 [2024-07-15 13:56:42.433510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.433525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.433542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.433557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.433572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.433588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433598] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.433603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.433618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.433633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.433648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.433664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.433680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:30.331 [2024-07-15 13:56:42.433803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.331 [2024-07-15 13:56:42.433809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.433820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.332 [2024-07-15 13:56:42.433824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.433835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.332 [2024-07-15 13:56:42.433841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.433851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.332 [2024-07-15 13:56:42.433856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 
13:56:42.433866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.332 [2024-07-15 13:56:42.433871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.433881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.433886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.433897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.433901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 
sqhd:0004 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.332 [2024-07-15 13:56:42.434248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:30.332 [2024-07-15 13:56:42.434784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.332 [2024-07-15 13:56:42.434789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.434799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.434804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.434815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.434820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.434830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.434835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.434847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.434852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.333 [2024-07-15 13:56:42.435055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84328 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.333 [2024-07-15 13:56:42.435609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.333 [2024-07-15 13:56:42.435624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.333 [2024-07-15 13:56:42.435639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.333 [2024-07-15 13:56:42.435654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.333 [2024-07-15 13:56:42.435669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.333 [2024-07-15 13:56:42.435684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.333 [2024-07-15 13:56:42.435699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:30.333 [2024-07-15 13:56:42.435950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.333 [2024-07-15 13:56:42.435954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:30.334 
[2024-07-15 13:56:42.436167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436827] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.436855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.436860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.437235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.437242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.437255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.437260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.437272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.437278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.437289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.437294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.437306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.437311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.437322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.334 [2024-07-15 13:56:42.437330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.437342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.334 [2024-07-15 13:56:42.437347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.437359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.334 [2024-07-15 
13:56:42.437364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.437376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.334 [2024-07-15 13:56:42.437381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.437392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.334 [2024-07-15 13:56:42.437397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:30.334 [2024-07-15 13:56:42.437409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.335 [2024-07-15 13:56:42.437414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.437426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.335 [2024-07-15 13:56:42.437431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.437443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.335 [2024-07-15 13:56:42.437448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.437460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.335 [2024-07-15 13:56:42.437465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.437476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.335 [2024-07-15 13:56:42.437481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.437493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.335 [2024-07-15 13:56:42.437498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.437942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.335 [2024-07-15 13:56:42.437949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.437961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84680 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.335 [2024-07-15 13:56:42.437968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.437981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.335 [2024-07-15 13:56:42.437985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.437998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.335 [2024-07-15 13:56:42.438003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.438015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.335 [2024-07-15 13:56:42.438020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.438033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.335 [2024-07-15 13:56:42.438038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.438050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.335 [2024-07-15 13:56:42.438056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.438069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.335 [2024-07-15 13:56:42.438074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.438087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.335 [2024-07-15 13:56:42.438092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.438104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.335 [2024-07-15 13:56:42.438109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.438125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.335 [2024-07-15 13:56:42.438131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:30.335 [2024-07-15 13:56:42.438144] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
00:26:30.335 [2024-07-15 13:56:42.438 through 13:56:54.364] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: a long run of paired *NOTICE* lines in which every outstanding READ and WRITE on qid:1 (nsid:1, len:8, lba roughly 83696-84408 and then 39816-40728) completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0; the tail of this run and the end-of-test summary continue below the filter sketch that follows.
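When reviewing a captured console log, runs like the one summarized above are easier to digest through a small shell filter than line by line. A minimal sketch, assuming the console output has been saved to a file named build.log (hypothetical name):

  # tally the failed I/O commands by opcode (READ vs WRITE)
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' build.log | awk '{print $NF}' | sort | uniq -c
  # count individual completions carrying the ANA inaccessible status (grep -o so multiple hits per wrapped line are counted)
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log | wc -l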
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:30.338 [2024-07-15 13:56:54.363977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.338 [2024-07-15 13:56:54.363981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:30.338 [2024-07-15 13:56:54.363992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.338 [2024-07-15 13:56:54.363999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:30.338 [2024-07-15 13:56:54.364010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.338 [2024-07-15 13:56:54.364015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:30.338 [2024-07-15 13:56:54.364025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.338 [2024-07-15 13:56:54.364030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:30.338 [2024-07-15 13:56:54.364041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.338 [2024-07-15 13:56:54.364045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:30.338 Received shutdown signal, test time was about 25.560053 seconds 00:26:30.338 00:26:30.338 Latency(us) 00:26:30.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.338 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:30.338 Verification LBA range: start 0x0 length 0x4000 00:26:30.338 Nvme0n1 : 25.56 11265.03 44.00 0.00 0.00 11343.85 450.56 3075822.93 00:26:30.338 =================================================================================================================== 00:26:30.338 Total : 11265.03 44.00 0.00 0.00 11343.85 450.56 3075822.93 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # 
set +e 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:30.338 rmmod nvme_tcp 00:26:30.338 rmmod nvme_fabrics 00:26:30.338 rmmod nvme_keyring 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1218209 ']' 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1218209 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1218209 ']' 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1218209 00:26:30.338 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:30.339 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:30.339 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1218209 00:26:30.599 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:30.599 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:30.599 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1218209' 00:26:30.599 killing process with pid 1218209 00:26:30.599 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1218209 00:26:30.599 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1218209 00:26:30.599 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:30.599 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:30.599 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:30.599 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:30.599 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:30.599 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.599 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.599 13:56:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.156 13:56:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:33.156 00:26:33.156 real 0m38.599s 00:26:33.156 user 1m38.985s 00:26:33.156 sys 0m10.626s 00:26:33.156 13:56:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:33.156 13:56:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:33.156 ************************************ 00:26:33.156 END TEST nvmf_host_multipath_status 00:26:33.156 ************************************ 00:26:33.156 13:56:59 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:26:33.156 13:56:59 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:33.156 13:56:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:33.156 13:56:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:33.156 13:56:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:33.156 ************************************ 00:26:33.156 START TEST nvmf_discovery_remove_ifc 00:26:33.156 ************************************ 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:33.156 * Looking for test storage... 00:26:33.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:33.156 13:56:59 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:33.156 13:56:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 
-- # mlx=() 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:39.777 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.777 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:39.778 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:39.778 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:39.778 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.778 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.039 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.039 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.039 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:40.039 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.039 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.039 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.039 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:40.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:26:40.300 00:26:40.300 --- 10.0.0.2 ping statistics --- 00:26:40.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.300 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:26:40.300 00:26:40.300 --- 10.0.0.1 ping statistics --- 00:26:40.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.300 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1228243 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1228243 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1228243 ']' 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.300 13:57:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:40.300 [2024-07-15 13:57:06.685030] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
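The nvmf_tcp_init steps traced above wire the two ice/E810 ports (cvl_0_0, cvl_0_1) back to back through a network namespace, so the target can listen on 10.0.0.2 inside cvl_0_0_ns_spdk while the initiator reaches it from 10.0.0.1 in the default namespace. Collected from the trace into one place (interface names are specific to this host; addresses are flushed first in the trace), the topology amounts to:

  # move the target-side port into its own namespace; the initiator port stays in the default one
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # put both ends on the same /24
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  # bring the links up and allow NVMe/TCP traffic in on the default port
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify the path in both directions, exactly as the trace does
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1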
00:26:40.300 [2024-07-15 13:57:06.685096] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.300 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.300 [2024-07-15 13:57:06.772612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.561 [2024-07-15 13:57:06.867866] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.561 [2024-07-15 13:57:06.867921] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.561 [2024-07-15 13:57:06.867929] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.561 [2024-07-15 13:57:06.867936] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.561 [2024-07-15 13:57:06.867943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:40.561 [2024-07-15 13:57:06.867967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.133 [2024-07-15 13:57:07.523421] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.133 [2024-07-15 13:57:07.531591] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:41.133 null0 00:26:41.133 [2024-07-15 13:57:07.563583] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1228373 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1228373 /tmp/host.sock 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1228373 ']' 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:41.133 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:41.133 13:57:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.133 [2024-07-15 13:57:07.650052] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:41.133 [2024-07-15 13:57:07.650113] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228373 ] 00:26:41.394 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.394 [2024-07-15 13:57:07.713712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.394 [2024-07-15 13:57:07.789137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.964 13:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:41.964 13:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:41.964 13:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:41.964 13:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:41.964 13:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.964 13:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.964 13:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.964 13:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:41.964 13:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.964 13:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.964 13:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.964 13:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:41.964 13:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.964 13:57:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.345 [2024-07-15 13:57:09.533345] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:43.345 [2024-07-15 13:57:09.533368] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:43.345 [2024-07-15 13:57:09.533383] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:43.346 [2024-07-15 13:57:09.622672] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:43.346 [2024-07-15 13:57:09.805669] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:43.346 [2024-07-15 13:57:09.805722] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:43.346 [2024-07-15 13:57:09.805746] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:43.346 [2024-07-15 13:57:09.805761] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:43.346 [2024-07-15 13:57:09.805784] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:43.346 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.346 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:43.346 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.346 [2024-07-15 13:57:09.811583] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf6e7b0 was disconnected and freed. delete nvme_qpair. 00:26:43.346 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.346 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.346 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.346 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.346 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.346 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.346 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.346 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:43.346 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:43.605 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:43.605 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:43.605 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.605 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.605 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.605 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.605 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.605 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.605 13:57:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.605 13:57:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.605 13:57:10 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:43.605 13:57:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:44.546 13:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:44.546 13:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:44.546 13:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.546 13:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:44.546 13:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.546 13:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.546 13:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:44.546 13:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.807 13:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:44.807 13:57:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:45.773 13:57:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:45.773 13:57:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.773 13:57:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:45.773 13:57:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.773 13:57:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.773 13:57:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:45.773 13:57:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:45.773 13:57:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.773 13:57:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:45.773 13:57:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:46.714 13:57:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:46.714 13:57:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.714 13:57:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:46.714 13:57:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.714 13:57:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.714 13:57:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:46.714 13:57:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:46.714 13:57:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.714 13:57:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:46.714 13:57:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:48.097 13:57:14 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:48.097 13:57:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:48.097 13:57:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.097 13:57:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:48.097 13:57:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.097 13:57:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:48.097 13:57:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.097 13:57:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.097 13:57:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:48.097 13:57:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:49.039 [2024-07-15 13:57:15.246118] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:49.039 [2024-07-15 13:57:15.246171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.039 [2024-07-15 13:57:15.246184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.039 [2024-07-15 13:57:15.246194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.039 [2024-07-15 13:57:15.246201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.039 [2024-07-15 13:57:15.246209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.039 [2024-07-15 13:57:15.246216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.039 [2024-07-15 13:57:15.246224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.039 [2024-07-15 13:57:15.246231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.039 [2024-07-15 13:57:15.246239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.039 [2024-07-15 13:57:15.246250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.039 [2024-07-15 13:57:15.246258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf35040 is same with the state(5) to be set 00:26:49.039 [2024-07-15 13:57:15.256140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf35040 (9): Bad file descriptor 00:26:49.039 [2024-07-15 13:57:15.266180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:49.039 13:57:15 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:49.039 13:57:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.039 13:57:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:49.039 13:57:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:49.039 13:57:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.039 13:57:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:49.039 13:57:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.981 [2024-07-15 13:57:16.297147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:49.981 [2024-07-15 13:57:16.297185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf35040 with addr=10.0.0.2, port=4420 00:26:49.981 [2024-07-15 13:57:16.297196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf35040 is same with the state(5) to be set 00:26:49.981 [2024-07-15 13:57:16.297217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf35040 (9): Bad file descriptor 00:26:49.981 [2024-07-15 13:57:16.297578] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:49.981 [2024-07-15 13:57:16.297595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:49.981 [2024-07-15 13:57:16.297603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:49.981 [2024-07-15 13:57:16.297611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:49.981 [2024-07-15 13:57:16.297626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.981 [2024-07-15 13:57:16.297634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:49.981 13:57:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.981 13:57:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:49.981 13:57:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:50.922 [2024-07-15 13:57:17.300011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:50.922 [2024-07-15 13:57:17.300031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:50.922 [2024-07-15 13:57:17.300039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:50.922 [2024-07-15 13:57:17.300047] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:50.923 [2024-07-15 13:57:17.300059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:50.923 [2024-07-15 13:57:17.300079] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:50.923 [2024-07-15 13:57:17.300101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.923 [2024-07-15 13:57:17.300111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.923 [2024-07-15 13:57:17.300129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.923 [2024-07-15 13:57:17.300137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.923 [2024-07-15 13:57:17.300145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.923 [2024-07-15 13:57:17.300152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.923 [2024-07-15 13:57:17.300160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.923 [2024-07-15 13:57:17.300167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.923 [2024-07-15 13:57:17.300175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.923 [2024-07-15 13:57:17.300182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.923 [2024-07-15 13:57:17.300189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
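The teardown sequence above is the point of the discovery_remove_ifc test: the host was attached through discovery with short reconnect/loss timeouts, so once the test pulls the target address and downs the interface, reconnect attempts fail with errno 110, the controller ends up in failed state, and the nvme0n1 bdev disappears from bdev_get_bdevs (the empty list checked just below). A condensed sketch of the commands driving this, all taken from the trace earlier in this run:

    # Attach via discovery with aggressive timeouts (host/discovery_remove_ifc.sh@69).
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

    # Remove the interface out from under the controller (host/discovery_remove_ifc.sh@75-76).
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down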
00:26:50.923 [2024-07-15 13:57:17.300575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf344c0 (9): Bad file descriptor 00:26:50.923 [2024-07-15 13:57:17.301587] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:50.923 [2024-07-15 13:57:17.301598] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:50.923 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:50.923 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:50.923 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.923 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:50.923 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.923 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:50.923 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.923 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.923 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:50.923 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.923 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.183 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:51.183 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:51.183 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.183 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:51.183 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.183 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:51.183 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:51.183 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:51.183 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.183 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:51.183 13:57:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:52.125 13:57:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.125 13:57:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.125 13:57:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.125 13:57:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.125 13:57:18 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:26:52.125 13:57:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.125 13:57:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.125 13:57:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.125 13:57:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:52.125 13:57:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:53.066 [2024-07-15 13:57:19.316906] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:53.066 [2024-07-15 13:57:19.316927] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:53.066 [2024-07-15 13:57:19.316942] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:53.066 [2024-07-15 13:57:19.405224] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:53.066 13:57:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:53.326 13:57:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:53.326 13:57:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.326 13:57:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:53.326 13:57:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.326 13:57:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:53.326 13:57:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:53.326 13:57:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.326 [2024-07-15 13:57:19.632580] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:53.326 [2024-07-15 13:57:19.632618] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:53.326 [2024-07-15 13:57:19.632640] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:53.326 [2024-07-15 13:57:19.632655] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:53.326 [2024-07-15 13:57:19.632663] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:53.326 [2024-07-15 13:57:19.635886] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf4b310 was disconnected and freed. delete nvme_qpair. 
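Throughout the test, wait_for_bdev simply polls the bdev list until it matches the expected value (nvme0n1 at attach, '' after the interface is removed, nvme1n1 here after re-discovery). A minimal sketch of that polling, assuming the same rpc_cmd helper and /tmp/host.sock RPC socket seen in the trace; the loop body is an approximation of what the repeated rpc_cmd/jq/sort/xargs lines are doing:

    get_bdev_list() {
        # List bdev names known to the host app as one sorted, space-separated line.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once a second until the bdev list equals the expected value.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme1n1   # returns once re-discovery has attached the new controller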
00:26:53.326 13:57:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:53.326 13:57:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1228373 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1228373 ']' 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1228373 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1228373 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1228373' 00:26:54.267 killing process with pid 1228373 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1228373 00:26:54.267 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1228373 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:54.529 rmmod nvme_tcp 00:26:54.529 rmmod nvme_fabrics 00:26:54.529 rmmod nvme_keyring 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1228243 ']' 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1228243 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1228243 ']' 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1228243 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:54.529 13:57:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1228243 00:26:54.529 13:57:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:54.529 13:57:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:54.529 13:57:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1228243' 00:26:54.529 killing process with pid 1228243 00:26:54.529 13:57:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1228243 00:26:54.529 13:57:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1228243 00:26:54.791 13:57:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:54.791 13:57:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:54.791 13:57:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:54.791 13:57:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:54.791 13:57:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:54.791 13:57:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.791 13:57:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:54.791 13:57:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.702 13:57:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:56.702 00:26:56.702 real 0m24.038s 00:26:56.702 user 0m29.282s 00:26:56.702 sys 0m6.767s 00:26:56.702 13:57:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:56.702 13:57:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.702 ************************************ 00:26:56.702 END TEST nvmf_discovery_remove_ifc 00:26:56.702 ************************************ 00:26:56.993 13:57:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:56.993 13:57:23 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:56.993 13:57:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:56.993 13:57:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:26:56.993 13:57:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:56.993 ************************************ 00:26:56.993 START TEST nvmf_identify_kernel_target 00:26:56.993 ************************************ 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:56.993 * Looking for test storage... 00:26:56.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:56.993 13:57:23 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:56.993 13:57:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:05.132 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:05.132 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:05.132 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:05.132 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.132 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:05.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:27:05.133 00:27:05.133 --- 10.0.0.2 ping statistics --- 00:27:05.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.133 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:05.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:27:05.133 00:27:05.133 --- 10.0.0.1 ping statistics --- 00:27:05.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.133 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:05.133 13:57:30 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:05.133 13:57:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:07.680 Waiting for block devices as requested 00:27:07.680 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:07.680 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:07.680 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:07.680 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:07.941 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:07.941 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:07.941 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:08.202 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:08.202 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:08.463 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:08.463 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:08.463 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:08.463 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:08.724 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:08.724 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:08.724 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:08.984 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:09.244 No valid GPT data, bailing 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:09.244 00:27:09.244 Discovery Log Number of Records 2, Generation counter 2 00:27:09.244 =====Discovery Log Entry 0====== 00:27:09.244 trtype: tcp 00:27:09.244 adrfam: ipv4 00:27:09.244 subtype: current discovery subsystem 00:27:09.244 treq: not specified, sq flow control disable supported 00:27:09.244 portid: 1 00:27:09.244 trsvcid: 4420 00:27:09.244 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:09.244 traddr: 10.0.0.1 00:27:09.244 eflags: none 00:27:09.244 sectype: none 00:27:09.244 =====Discovery Log Entry 1====== 00:27:09.244 trtype: tcp 00:27:09.244 adrfam: ipv4 00:27:09.244 subtype: nvme subsystem 00:27:09.244 treq: not specified, sq flow control disable supported 00:27:09.244 portid: 1 00:27:09.244 trsvcid: 4420 00:27:09.244 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:09.244 traddr: 10.0.0.1 00:27:09.244 eflags: none 00:27:09.244 sectype: none 00:27:09.244 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:09.244 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:09.244 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.506 ===================================================== 00:27:09.506 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:09.506 ===================================================== 00:27:09.506 Controller Capabilities/Features 00:27:09.506 ================================ 00:27:09.506 Vendor ID: 0000 00:27:09.506 Subsystem Vendor ID: 0000 00:27:09.506 Serial Number: cabdef4878eb59f4269c 00:27:09.506 Model Number: Linux 00:27:09.506 Firmware Version: 6.7.0-68 00:27:09.506 Recommended Arb Burst: 0 00:27:09.506 IEEE OUI Identifier: 00 00 00 00:27:09.506 Multi-path I/O 00:27:09.506 May have multiple subsystem ports: No 00:27:09.506 May have multiple 
controllers: No 00:27:09.506 Associated with SR-IOV VF: No 00:27:09.506 Max Data Transfer Size: Unlimited 00:27:09.506 Max Number of Namespaces: 0 00:27:09.506 Max Number of I/O Queues: 1024 00:27:09.506 NVMe Specification Version (VS): 1.3 00:27:09.506 NVMe Specification Version (Identify): 1.3 00:27:09.506 Maximum Queue Entries: 1024 00:27:09.506 Contiguous Queues Required: No 00:27:09.506 Arbitration Mechanisms Supported 00:27:09.506 Weighted Round Robin: Not Supported 00:27:09.506 Vendor Specific: Not Supported 00:27:09.506 Reset Timeout: 7500 ms 00:27:09.506 Doorbell Stride: 4 bytes 00:27:09.506 NVM Subsystem Reset: Not Supported 00:27:09.506 Command Sets Supported 00:27:09.506 NVM Command Set: Supported 00:27:09.506 Boot Partition: Not Supported 00:27:09.506 Memory Page Size Minimum: 4096 bytes 00:27:09.506 Memory Page Size Maximum: 4096 bytes 00:27:09.506 Persistent Memory Region: Not Supported 00:27:09.506 Optional Asynchronous Events Supported 00:27:09.506 Namespace Attribute Notices: Not Supported 00:27:09.506 Firmware Activation Notices: Not Supported 00:27:09.506 ANA Change Notices: Not Supported 00:27:09.506 PLE Aggregate Log Change Notices: Not Supported 00:27:09.506 LBA Status Info Alert Notices: Not Supported 00:27:09.506 EGE Aggregate Log Change Notices: Not Supported 00:27:09.506 Normal NVM Subsystem Shutdown event: Not Supported 00:27:09.506 Zone Descriptor Change Notices: Not Supported 00:27:09.506 Discovery Log Change Notices: Supported 00:27:09.506 Controller Attributes 00:27:09.506 128-bit Host Identifier: Not Supported 00:27:09.506 Non-Operational Permissive Mode: Not Supported 00:27:09.506 NVM Sets: Not Supported 00:27:09.506 Read Recovery Levels: Not Supported 00:27:09.506 Endurance Groups: Not Supported 00:27:09.506 Predictable Latency Mode: Not Supported 00:27:09.506 Traffic Based Keep ALive: Not Supported 00:27:09.506 Namespace Granularity: Not Supported 00:27:09.506 SQ Associations: Not Supported 00:27:09.506 UUID List: Not Supported 00:27:09.506 Multi-Domain Subsystem: Not Supported 00:27:09.506 Fixed Capacity Management: Not Supported 00:27:09.506 Variable Capacity Management: Not Supported 00:27:09.506 Delete Endurance Group: Not Supported 00:27:09.506 Delete NVM Set: Not Supported 00:27:09.506 Extended LBA Formats Supported: Not Supported 00:27:09.506 Flexible Data Placement Supported: Not Supported 00:27:09.506 00:27:09.506 Controller Memory Buffer Support 00:27:09.506 ================================ 00:27:09.506 Supported: No 00:27:09.506 00:27:09.506 Persistent Memory Region Support 00:27:09.506 ================================ 00:27:09.506 Supported: No 00:27:09.506 00:27:09.506 Admin Command Set Attributes 00:27:09.506 ============================ 00:27:09.506 Security Send/Receive: Not Supported 00:27:09.506 Format NVM: Not Supported 00:27:09.506 Firmware Activate/Download: Not Supported 00:27:09.506 Namespace Management: Not Supported 00:27:09.506 Device Self-Test: Not Supported 00:27:09.506 Directives: Not Supported 00:27:09.506 NVMe-MI: Not Supported 00:27:09.506 Virtualization Management: Not Supported 00:27:09.506 Doorbell Buffer Config: Not Supported 00:27:09.506 Get LBA Status Capability: Not Supported 00:27:09.506 Command & Feature Lockdown Capability: Not Supported 00:27:09.506 Abort Command Limit: 1 00:27:09.506 Async Event Request Limit: 1 00:27:09.506 Number of Firmware Slots: N/A 00:27:09.506 Firmware Slot 1 Read-Only: N/A 00:27:09.506 Firmware Activation Without Reset: N/A 00:27:09.506 Multiple Update Detection Support: N/A 
00:27:09.506 Firmware Update Granularity: No Information Provided 00:27:09.506 Per-Namespace SMART Log: No 00:27:09.506 Asymmetric Namespace Access Log Page: Not Supported 00:27:09.506 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:09.506 Command Effects Log Page: Not Supported 00:27:09.506 Get Log Page Extended Data: Supported 00:27:09.506 Telemetry Log Pages: Not Supported 00:27:09.506 Persistent Event Log Pages: Not Supported 00:27:09.506 Supported Log Pages Log Page: May Support 00:27:09.506 Commands Supported & Effects Log Page: Not Supported 00:27:09.506 Feature Identifiers & Effects Log Page:May Support 00:27:09.506 NVMe-MI Commands & Effects Log Page: May Support 00:27:09.506 Data Area 4 for Telemetry Log: Not Supported 00:27:09.506 Error Log Page Entries Supported: 1 00:27:09.506 Keep Alive: Not Supported 00:27:09.506 00:27:09.506 NVM Command Set Attributes 00:27:09.506 ========================== 00:27:09.506 Submission Queue Entry Size 00:27:09.506 Max: 1 00:27:09.506 Min: 1 00:27:09.506 Completion Queue Entry Size 00:27:09.506 Max: 1 00:27:09.506 Min: 1 00:27:09.506 Number of Namespaces: 0 00:27:09.506 Compare Command: Not Supported 00:27:09.506 Write Uncorrectable Command: Not Supported 00:27:09.506 Dataset Management Command: Not Supported 00:27:09.506 Write Zeroes Command: Not Supported 00:27:09.506 Set Features Save Field: Not Supported 00:27:09.506 Reservations: Not Supported 00:27:09.506 Timestamp: Not Supported 00:27:09.506 Copy: Not Supported 00:27:09.506 Volatile Write Cache: Not Present 00:27:09.506 Atomic Write Unit (Normal): 1 00:27:09.506 Atomic Write Unit (PFail): 1 00:27:09.506 Atomic Compare & Write Unit: 1 00:27:09.506 Fused Compare & Write: Not Supported 00:27:09.506 Scatter-Gather List 00:27:09.506 SGL Command Set: Supported 00:27:09.506 SGL Keyed: Not Supported 00:27:09.506 SGL Bit Bucket Descriptor: Not Supported 00:27:09.506 SGL Metadata Pointer: Not Supported 00:27:09.506 Oversized SGL: Not Supported 00:27:09.506 SGL Metadata Address: Not Supported 00:27:09.506 SGL Offset: Supported 00:27:09.506 Transport SGL Data Block: Not Supported 00:27:09.506 Replay Protected Memory Block: Not Supported 00:27:09.506 00:27:09.506 Firmware Slot Information 00:27:09.506 ========================= 00:27:09.506 Active slot: 0 00:27:09.506 00:27:09.506 00:27:09.506 Error Log 00:27:09.506 ========= 00:27:09.506 00:27:09.506 Active Namespaces 00:27:09.506 ================= 00:27:09.506 Discovery Log Page 00:27:09.506 ================== 00:27:09.506 Generation Counter: 2 00:27:09.506 Number of Records: 2 00:27:09.506 Record Format: 0 00:27:09.506 00:27:09.506 Discovery Log Entry 0 00:27:09.506 ---------------------- 00:27:09.506 Transport Type: 3 (TCP) 00:27:09.506 Address Family: 1 (IPv4) 00:27:09.506 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:09.506 Entry Flags: 00:27:09.506 Duplicate Returned Information: 0 00:27:09.506 Explicit Persistent Connection Support for Discovery: 0 00:27:09.506 Transport Requirements: 00:27:09.506 Secure Channel: Not Specified 00:27:09.506 Port ID: 1 (0x0001) 00:27:09.506 Controller ID: 65535 (0xffff) 00:27:09.506 Admin Max SQ Size: 32 00:27:09.506 Transport Service Identifier: 4420 00:27:09.506 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:09.506 Transport Address: 10.0.0.1 00:27:09.506 Discovery Log Entry 1 00:27:09.506 ---------------------- 00:27:09.506 Transport Type: 3 (TCP) 00:27:09.506 Address Family: 1 (IPv4) 00:27:09.506 Subsystem Type: 2 (NVM Subsystem) 00:27:09.506 Entry Flags: 
00:27:09.507 Duplicate Returned Information: 0 00:27:09.507 Explicit Persistent Connection Support for Discovery: 0 00:27:09.507 Transport Requirements: 00:27:09.507 Secure Channel: Not Specified 00:27:09.507 Port ID: 1 (0x0001) 00:27:09.507 Controller ID: 65535 (0xffff) 00:27:09.507 Admin Max SQ Size: 32 00:27:09.507 Transport Service Identifier: 4420 00:27:09.507 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:09.507 Transport Address: 10.0.0.1 00:27:09.507 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:09.507 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.507 get_feature(0x01) failed 00:27:09.507 get_feature(0x02) failed 00:27:09.507 get_feature(0x04) failed 00:27:09.507 ===================================================== 00:27:09.507 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:09.507 ===================================================== 00:27:09.507 Controller Capabilities/Features 00:27:09.507 ================================ 00:27:09.507 Vendor ID: 0000 00:27:09.507 Subsystem Vendor ID: 0000 00:27:09.507 Serial Number: 467f296833c4d95ceb30 00:27:09.507 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:09.507 Firmware Version: 6.7.0-68 00:27:09.507 Recommended Arb Burst: 6 00:27:09.507 IEEE OUI Identifier: 00 00 00 00:27:09.507 Multi-path I/O 00:27:09.507 May have multiple subsystem ports: Yes 00:27:09.507 May have multiple controllers: Yes 00:27:09.507 Associated with SR-IOV VF: No 00:27:09.507 Max Data Transfer Size: Unlimited 00:27:09.507 Max Number of Namespaces: 1024 00:27:09.507 Max Number of I/O Queues: 128 00:27:09.507 NVMe Specification Version (VS): 1.3 00:27:09.507 NVMe Specification Version (Identify): 1.3 00:27:09.507 Maximum Queue Entries: 1024 00:27:09.507 Contiguous Queues Required: No 00:27:09.507 Arbitration Mechanisms Supported 00:27:09.507 Weighted Round Robin: Not Supported 00:27:09.507 Vendor Specific: Not Supported 00:27:09.507 Reset Timeout: 7500 ms 00:27:09.507 Doorbell Stride: 4 bytes 00:27:09.507 NVM Subsystem Reset: Not Supported 00:27:09.507 Command Sets Supported 00:27:09.507 NVM Command Set: Supported 00:27:09.507 Boot Partition: Not Supported 00:27:09.507 Memory Page Size Minimum: 4096 bytes 00:27:09.507 Memory Page Size Maximum: 4096 bytes 00:27:09.507 Persistent Memory Region: Not Supported 00:27:09.507 Optional Asynchronous Events Supported 00:27:09.507 Namespace Attribute Notices: Supported 00:27:09.507 Firmware Activation Notices: Not Supported 00:27:09.507 ANA Change Notices: Supported 00:27:09.507 PLE Aggregate Log Change Notices: Not Supported 00:27:09.507 LBA Status Info Alert Notices: Not Supported 00:27:09.507 EGE Aggregate Log Change Notices: Not Supported 00:27:09.507 Normal NVM Subsystem Shutdown event: Not Supported 00:27:09.507 Zone Descriptor Change Notices: Not Supported 00:27:09.507 Discovery Log Change Notices: Not Supported 00:27:09.507 Controller Attributes 00:27:09.507 128-bit Host Identifier: Supported 00:27:09.507 Non-Operational Permissive Mode: Not Supported 00:27:09.507 NVM Sets: Not Supported 00:27:09.507 Read Recovery Levels: Not Supported 00:27:09.507 Endurance Groups: Not Supported 00:27:09.507 Predictable Latency Mode: Not Supported 00:27:09.507 Traffic Based Keep ALive: Supported 00:27:09.507 Namespace Granularity: Not Supported 
00:27:09.507 SQ Associations: Not Supported 00:27:09.507 UUID List: Not Supported 00:27:09.507 Multi-Domain Subsystem: Not Supported 00:27:09.507 Fixed Capacity Management: Not Supported 00:27:09.507 Variable Capacity Management: Not Supported 00:27:09.507 Delete Endurance Group: Not Supported 00:27:09.507 Delete NVM Set: Not Supported 00:27:09.507 Extended LBA Formats Supported: Not Supported 00:27:09.507 Flexible Data Placement Supported: Not Supported 00:27:09.507 00:27:09.507 Controller Memory Buffer Support 00:27:09.507 ================================ 00:27:09.507 Supported: No 00:27:09.507 00:27:09.507 Persistent Memory Region Support 00:27:09.507 ================================ 00:27:09.507 Supported: No 00:27:09.507 00:27:09.507 Admin Command Set Attributes 00:27:09.507 ============================ 00:27:09.507 Security Send/Receive: Not Supported 00:27:09.507 Format NVM: Not Supported 00:27:09.507 Firmware Activate/Download: Not Supported 00:27:09.507 Namespace Management: Not Supported 00:27:09.507 Device Self-Test: Not Supported 00:27:09.507 Directives: Not Supported 00:27:09.507 NVMe-MI: Not Supported 00:27:09.507 Virtualization Management: Not Supported 00:27:09.507 Doorbell Buffer Config: Not Supported 00:27:09.507 Get LBA Status Capability: Not Supported 00:27:09.507 Command & Feature Lockdown Capability: Not Supported 00:27:09.507 Abort Command Limit: 4 00:27:09.507 Async Event Request Limit: 4 00:27:09.507 Number of Firmware Slots: N/A 00:27:09.507 Firmware Slot 1 Read-Only: N/A 00:27:09.507 Firmware Activation Without Reset: N/A 00:27:09.507 Multiple Update Detection Support: N/A 00:27:09.507 Firmware Update Granularity: No Information Provided 00:27:09.507 Per-Namespace SMART Log: Yes 00:27:09.507 Asymmetric Namespace Access Log Page: Supported 00:27:09.507 ANA Transition Time : 10 sec 00:27:09.507 00:27:09.507 Asymmetric Namespace Access Capabilities 00:27:09.507 ANA Optimized State : Supported 00:27:09.507 ANA Non-Optimized State : Supported 00:27:09.507 ANA Inaccessible State : Supported 00:27:09.507 ANA Persistent Loss State : Supported 00:27:09.507 ANA Change State : Supported 00:27:09.507 ANAGRPID is not changed : No 00:27:09.507 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:09.507 00:27:09.507 ANA Group Identifier Maximum : 128 00:27:09.507 Number of ANA Group Identifiers : 128 00:27:09.507 Max Number of Allowed Namespaces : 1024 00:27:09.507 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:09.507 Command Effects Log Page: Supported 00:27:09.507 Get Log Page Extended Data: Supported 00:27:09.507 Telemetry Log Pages: Not Supported 00:27:09.507 Persistent Event Log Pages: Not Supported 00:27:09.507 Supported Log Pages Log Page: May Support 00:27:09.507 Commands Supported & Effects Log Page: Not Supported 00:27:09.507 Feature Identifiers & Effects Log Page:May Support 00:27:09.507 NVMe-MI Commands & Effects Log Page: May Support 00:27:09.507 Data Area 4 for Telemetry Log: Not Supported 00:27:09.507 Error Log Page Entries Supported: 128 00:27:09.507 Keep Alive: Supported 00:27:09.507 Keep Alive Granularity: 1000 ms 00:27:09.507 00:27:09.507 NVM Command Set Attributes 00:27:09.507 ========================== 00:27:09.507 Submission Queue Entry Size 00:27:09.507 Max: 64 00:27:09.507 Min: 64 00:27:09.507 Completion Queue Entry Size 00:27:09.507 Max: 16 00:27:09.507 Min: 16 00:27:09.507 Number of Namespaces: 1024 00:27:09.507 Compare Command: Not Supported 00:27:09.507 Write Uncorrectable Command: Not Supported 00:27:09.507 Dataset Management Command: Supported 
00:27:09.507 Write Zeroes Command: Supported 00:27:09.507 Set Features Save Field: Not Supported 00:27:09.507 Reservations: Not Supported 00:27:09.507 Timestamp: Not Supported 00:27:09.507 Copy: Not Supported 00:27:09.507 Volatile Write Cache: Present 00:27:09.507 Atomic Write Unit (Normal): 1 00:27:09.507 Atomic Write Unit (PFail): 1 00:27:09.507 Atomic Compare & Write Unit: 1 00:27:09.507 Fused Compare & Write: Not Supported 00:27:09.507 Scatter-Gather List 00:27:09.507 SGL Command Set: Supported 00:27:09.507 SGL Keyed: Not Supported 00:27:09.507 SGL Bit Bucket Descriptor: Not Supported 00:27:09.507 SGL Metadata Pointer: Not Supported 00:27:09.507 Oversized SGL: Not Supported 00:27:09.507 SGL Metadata Address: Not Supported 00:27:09.507 SGL Offset: Supported 00:27:09.507 Transport SGL Data Block: Not Supported 00:27:09.507 Replay Protected Memory Block: Not Supported 00:27:09.507 00:27:09.507 Firmware Slot Information 00:27:09.507 ========================= 00:27:09.507 Active slot: 0 00:27:09.507 00:27:09.507 Asymmetric Namespace Access 00:27:09.507 =========================== 00:27:09.507 Change Count : 0 00:27:09.507 Number of ANA Group Descriptors : 1 00:27:09.507 ANA Group Descriptor : 0 00:27:09.507 ANA Group ID : 1 00:27:09.507 Number of NSID Values : 1 00:27:09.507 Change Count : 0 00:27:09.507 ANA State : 1 00:27:09.507 Namespace Identifier : 1 00:27:09.507 00:27:09.507 Commands Supported and Effects 00:27:09.507 ============================== 00:27:09.507 Admin Commands 00:27:09.507 -------------- 00:27:09.507 Get Log Page (02h): Supported 00:27:09.507 Identify (06h): Supported 00:27:09.507 Abort (08h): Supported 00:27:09.507 Set Features (09h): Supported 00:27:09.507 Get Features (0Ah): Supported 00:27:09.507 Asynchronous Event Request (0Ch): Supported 00:27:09.507 Keep Alive (18h): Supported 00:27:09.507 I/O Commands 00:27:09.507 ------------ 00:27:09.507 Flush (00h): Supported 00:27:09.507 Write (01h): Supported LBA-Change 00:27:09.507 Read (02h): Supported 00:27:09.507 Write Zeroes (08h): Supported LBA-Change 00:27:09.507 Dataset Management (09h): Supported 00:27:09.507 00:27:09.507 Error Log 00:27:09.508 ========= 00:27:09.508 Entry: 0 00:27:09.508 Error Count: 0x3 00:27:09.508 Submission Queue Id: 0x0 00:27:09.508 Command Id: 0x5 00:27:09.508 Phase Bit: 0 00:27:09.508 Status Code: 0x2 00:27:09.508 Status Code Type: 0x0 00:27:09.508 Do Not Retry: 1 00:27:09.508 Error Location: 0x28 00:27:09.508 LBA: 0x0 00:27:09.508 Namespace: 0x0 00:27:09.508 Vendor Log Page: 0x0 00:27:09.508 ----------- 00:27:09.508 Entry: 1 00:27:09.508 Error Count: 0x2 00:27:09.508 Submission Queue Id: 0x0 00:27:09.508 Command Id: 0x5 00:27:09.508 Phase Bit: 0 00:27:09.508 Status Code: 0x2 00:27:09.508 Status Code Type: 0x0 00:27:09.508 Do Not Retry: 1 00:27:09.508 Error Location: 0x28 00:27:09.508 LBA: 0x0 00:27:09.508 Namespace: 0x0 00:27:09.508 Vendor Log Page: 0x0 00:27:09.508 ----------- 00:27:09.508 Entry: 2 00:27:09.508 Error Count: 0x1 00:27:09.508 Submission Queue Id: 0x0 00:27:09.508 Command Id: 0x4 00:27:09.508 Phase Bit: 0 00:27:09.508 Status Code: 0x2 00:27:09.508 Status Code Type: 0x0 00:27:09.508 Do Not Retry: 1 00:27:09.508 Error Location: 0x28 00:27:09.508 LBA: 0x0 00:27:09.508 Namespace: 0x0 00:27:09.508 Vendor Log Page: 0x0 00:27:09.508 00:27:09.508 Number of Queues 00:27:09.508 ================ 00:27:09.508 Number of I/O Submission Queues: 128 00:27:09.508 Number of I/O Completion Queues: 128 00:27:09.508 00:27:09.508 ZNS Specific Controller Data 00:27:09.508 
============================ 00:27:09.508 Zone Append Size Limit: 0 00:27:09.508 00:27:09.508 00:27:09.508 Active Namespaces 00:27:09.508 ================= 00:27:09.508 get_feature(0x05) failed 00:27:09.508 Namespace ID:1 00:27:09.508 Command Set Identifier: NVM (00h) 00:27:09.508 Deallocate: Supported 00:27:09.508 Deallocated/Unwritten Error: Not Supported 00:27:09.508 Deallocated Read Value: Unknown 00:27:09.508 Deallocate in Write Zeroes: Not Supported 00:27:09.508 Deallocated Guard Field: 0xFFFF 00:27:09.508 Flush: Supported 00:27:09.508 Reservation: Not Supported 00:27:09.508 Namespace Sharing Capabilities: Multiple Controllers 00:27:09.508 Size (in LBAs): 3750748848 (1788GiB) 00:27:09.508 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:09.508 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:09.508 UUID: 416294dd-6b83-42bf-88b1-f38a50b90b94 00:27:09.508 Thin Provisioning: Not Supported 00:27:09.508 Per-NS Atomic Units: Yes 00:27:09.508 Atomic Write Unit (Normal): 8 00:27:09.508 Atomic Write Unit (PFail): 8 00:27:09.508 Preferred Write Granularity: 8 00:27:09.508 Atomic Compare & Write Unit: 8 00:27:09.508 Atomic Boundary Size (Normal): 0 00:27:09.508 Atomic Boundary Size (PFail): 0 00:27:09.508 Atomic Boundary Offset: 0 00:27:09.508 NGUID/EUI64 Never Reused: No 00:27:09.508 ANA group ID: 1 00:27:09.508 Namespace Write Protected: No 00:27:09.508 Number of LBA Formats: 1 00:27:09.508 Current LBA Format: LBA Format #00 00:27:09.508 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:09.508 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:09.508 rmmod nvme_tcp 00:27:09.508 rmmod nvme_fabrics 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:09.508 13:57:35 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.048 13:57:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:12.048 13:57:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:12.048 13:57:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:12.049 13:57:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:12.049 13:57:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:12.049 13:57:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:12.049 13:57:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:12.049 13:57:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:12.049 13:57:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:12.049 13:57:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:12.049 13:57:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:15.351 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:15.351 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:15.351 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:15.351 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:15.351 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:15.351 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:15.351 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:15.351 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:15.351 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:15.351 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:15.351 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:15.351 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:15.351 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:15.351 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:15.351 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:15.351 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:15.351 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:15.351 00:27:15.352 real 0m18.567s 00:27:15.352 user 0m5.017s 00:27:15.352 sys 0m10.522s 00:27:15.352 13:57:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:15.352 13:57:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:15.352 ************************************ 00:27:15.352 END TEST nvmf_identify_kernel_target 00:27:15.352 ************************************ 00:27:15.352 13:57:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:15.352 13:57:41 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:15.352 13:57:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:15.352 13:57:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:15.352 13:57:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:15.613 ************************************ 
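[Note: stripped of the xtrace prefixes, the kernel-target portion of the test that just finished (configure_kernel_target and clean_kernel_target in nvmf/common.sh) boils down to the configfs sequence below. The echo redirection targets are not visible in the trace, so the standard nvmet configfs attribute names are assumed here; paths and values are the ones from this run.]

    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    # Subsystem with one namespace backed by the local NVMe disk,
    # exported over a TCP port on 10.0.0.1:4420
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

    # The discovery log shown earlier then comes from the kernel target
    # (the run also passes --hostnqn/--hostid)
    nvme discover -t tcp -a 10.0.0.1 -s 4420

    # Teardown (clean_kernel_target): disable the namespace, unlink the
    # port, remove the configfs directories, unload the modules
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet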
00:27:15.613 START TEST nvmf_auth_host 00:27:15.613 ************************************ 00:27:15.613 13:57:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:15.613 * Looking for test storage... 00:27:15.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:15.613 13:57:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.759 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.759 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:23.759 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:23.759 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:23.759 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.760 
13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:23.760 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:23.760 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:23.760 Found net devices under 0000:4b:00.0: 
cvl_0_0 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:23.760 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:23.760 13:57:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:23.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:23.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:27:23.760 00:27:23.760 --- 10.0.0.2 ping statistics --- 00:27:23.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.760 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:23.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:23.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:27:23.760 00:27:23.760 --- 10.0.0.1 ping statistics --- 00:27:23.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.760 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1243109 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1243109 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1243109 ']' 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
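[Note: nvmfappstart above launches the SPDK target inside the namespace that was just created, because NVMF_APP is prefixed with NVMF_TARGET_NS_CMD. Reconstructed from the trace, the effective launch is roughly as follows; backgrounding with & and capturing $! are assumed, since only the resulting pid is visible in the log.]

    # "${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}" expands to:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!

    # waitforlisten (autotest_common.sh) then polls until the target
    # answers on its RPC socket, /var/tmp/spdk.sock
    waitforlisten "$nvmfpid"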
00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.760 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:23.761 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:23.761 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:23.761 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:23.761 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=197ad49b2e3023b00e956868707ed9b0 00:27:23.761 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:23.761 13:57:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.szI 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 197ad49b2e3023b00e956868707ed9b0 0 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 197ad49b2e3023b00e956868707ed9b0 0 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=197ad49b2e3023b00e956868707ed9b0 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.szI 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.szI 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.szI 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:23.761 
13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8a5f6c35b6b4bc23935a3885ceb570acc387582508afe0be4a95a74f3649621c 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Ebo 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8a5f6c35b6b4bc23935a3885ceb570acc387582508afe0be4a95a74f3649621c 3 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8a5f6c35b6b4bc23935a3885ceb570acc387582508afe0be4a95a74f3649621c 3 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8a5f6c35b6b4bc23935a3885ceb570acc387582508afe0be4a95a74f3649621c 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Ebo 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Ebo 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Ebo 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=af0e401b28f802c913501d441947df0f5476ddc97d196e75 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.pNX 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key af0e401b28f802c913501d441947df0f5476ddc97d196e75 0 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 af0e401b28f802c913501d441947df0f5476ddc97d196e75 0 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=af0e401b28f802c913501d441947df0f5476ddc97d196e75 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.pNX 00:27:23.761 13:57:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.pNX 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.pNX 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ee6e761600a79ea0cd43d2f3b6617563019b27888aaa7dcf 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.eP0 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ee6e761600a79ea0cd43d2f3b6617563019b27888aaa7dcf 2 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ee6e761600a79ea0cd43d2f3b6617563019b27888aaa7dcf 2 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ee6e761600a79ea0cd43d2f3b6617563019b27888aaa7dcf 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.eP0 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.eP0 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.eP0 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1c17c7bf9d46d051af9a6ea4d8d7487b 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.t9P 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1c17c7bf9d46d051af9a6ea4d8d7487b 1 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1c17c7bf9d46d051af9a6ea4d8d7487b 1 
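The `python -` steps inside the format_key calls above run a heredoc that `set -x` never echoes, so the encoding itself is invisible in this trace. As a minimal sketch only, assuming the conventional DH-HMAC-CHAP secret layout (base64 of the ASCII key text followed by its little-endian CRC-32, wrapped as DHHC-1:<digest-id>:<base64>:), the hidden step behaves roughly like the hypothetical helper below; the function name and the literal "00" digest id are illustrative, not taken from nvmf/common.sh. The 48-character base64 bodies of the null keys above are at least consistent with 32 bytes of ASCII hex plus a 4-byte checksum.

# Hypothetical stand-in for the hidden format_key heredoc: base64-encode the
# ASCII key text plus a little-endian CRC-32 and print a DHHC-1 secret string.
format_dhchap_secret() {
    local digest_id=$1 key=$2
    python3 -c '
import sys, base64, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{}:{}:".format(sys.argv[2], base64.b64encode(key + crc).decode()))
' "$key" "$digest_id"
}

# Illustrative use, mirroring "gen_dhchap_key null 32" from the trace above:
key=$(xxd -p -c0 -l 16 /dev/urandom)        # 32 hex characters of key material
secret=$(format_dhchap_secret 00 "$key")    # 00 = the digest id used for the null keys
keyfile=$(mktemp -t spdk.key-null.XXX)
echo "$secret" > "$keyfile"
chmod 0600 "$keyfile"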
00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1c17c7bf9d46d051af9a6ea4d8d7487b 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:23.761 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.t9P 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.t9P 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.t9P 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e77d11119963442115dd2bbbce9692c1 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.mmg 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e77d11119963442115dd2bbbce9692c1 1 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e77d11119963442115dd2bbbce9692c1 1 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e77d11119963442115dd2bbbce9692c1 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.mmg 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.mmg 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.mmg 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=1257b26d770e5b62d66167b350b09d7f4979181109adb08b 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.lCn 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1257b26d770e5b62d66167b350b09d7f4979181109adb08b 2 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1257b26d770e5b62d66167b350b09d7f4979181109adb08b 2 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1257b26d770e5b62d66167b350b09d7f4979181109adb08b 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.lCn 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.lCn 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.lCn 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=36ead22269d38b9efa13ca9ab1990497 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fef 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 36ead22269d38b9efa13ca9ab1990497 0 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 36ead22269d38b9efa13ca9ab1990497 0 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=36ead22269d38b9efa13ca9ab1990497 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fef 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fef 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.fef 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:24.022 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=95cab4d23658fac1823d6afa792457b4ee9ce16df7609f2558357bb9d89cae46 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.mST 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 95cab4d23658fac1823d6afa792457b4ee9ce16df7609f2558357bb9d89cae46 3 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 95cab4d23658fac1823d6afa792457b4ee9ce16df7609f2558357bb9d89cae46 3 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=95cab4d23658fac1823d6afa792457b4ee9ce16df7609f2558357bb9d89cae46 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.mST 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.mST 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.mST 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1243109 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1243109 ']' 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
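waitforlisten 1243109 above blocks until the nvmf_tgt started earlier answers RPCs on /var/tmp/spdk.sock, or fails if that PID dies first. The real helper lives in common/autotest_common.sh and is not shown in this trace; the loop below is a simplified, hypothetical illustration of the same idea (the wrapper name, retry count and poll interval are invented; only scripts/rpc.py and the rpc_get_methods RPC are real SPDK pieces).

# Simplified poll loop illustrating what waitforlisten accomplishes; not the
# actual helper from autotest_common.sh.
wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1               # target process died
        if scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
            return 0                                          # RPC server is listening
        fi
        sleep 0.1
    done
    return 1
}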
00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:24.023 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.szI 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Ebo ]] 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ebo 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.pNX 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.eP0 ]] 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eP0 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.t9P 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.mmg ]] 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mmg 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
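The host/auth.sh@80-82 entries above, together with the key3/key4 registrations that follow, all come from one loop over the freshly generated key files: index i is registered with the target's keyring as key<i>, and, when a controller key was generated for it, ckey<i> as well. Reconstructed from the trace (rpc_cmd is SPDK's wrapper around scripts/rpc.py), the loop is essentially:

# Register every generated secret with the running target's keyring, as traced
# at host/auth.sh@80-82; keyring_file_add_key takes a key name and a file path.
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done

ckeys[4] was left empty when the keys were generated, so the last iteration registers key4 alone; the [[ -n '' ]] test visible a little further on is that branch being skipped.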
00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.lCn 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.fef ]] 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.fef 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.mST 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.283 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
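configure_kernel_target above stands up an in-kernel NVMe-oF target for nqn.2024-02.io.spdk:cnode0, backed by /dev/nvme0n1 and listening on 10.0.0.1:4420/tcp; the mkdir, echo and ln -s entries that follow are its configfs writes, but set -x hides the redirection targets. The sketch below fills those targets in from the stock nvmet configfs attribute names (attr_model, device_path, enable, addr_*), which is an assumption, since the trace itself does not show them.

# Sketch of the kernel-target setup traced below; the attribute file names are
# assumed from the standard nvmet configfs layout, the values are from the trace.
modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1             > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
echo 1             > "$subsys/namespaces/1/enable"
echo 10.0.0.1      > "$port/addr_traddr"
echo tcp           > "$port/addr_trtype"
echo 4420          > "$port/addr_trsvcid"
echo ipv4          > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

host/auth.sh then restricts access to a single initiator: it creates /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0, writes 0 (presumably back into attr_allow_any_host), and links the host under the subsystem's allowed_hosts, as the @36-@38 entries further down show.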
00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:24.544 13:57:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:27.856 Waiting for block devices as requested 00:27:27.856 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:27.856 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:27.856 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:27.856 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:27.856 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:28.119 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:28.119 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:28.119 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:28.413 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:28.414 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:28.674 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:28.674 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:28.674 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:28.674 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:28.934 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:28.934 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:28.934 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:29.884 No valid GPT data, bailing 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:29.884 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:30.144 00:27:30.144 Discovery Log Number of Records 2, Generation counter 2 00:27:30.144 =====Discovery Log Entry 0====== 00:27:30.144 trtype: tcp 00:27:30.144 adrfam: ipv4 00:27:30.144 subtype: current discovery subsystem 00:27:30.144 treq: not specified, sq flow control disable supported 00:27:30.144 portid: 1 00:27:30.144 trsvcid: 4420 00:27:30.144 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:30.144 traddr: 10.0.0.1 00:27:30.144 eflags: none 00:27:30.144 sectype: none 00:27:30.144 =====Discovery Log Entry 1====== 00:27:30.144 trtype: tcp 00:27:30.144 adrfam: ipv4 00:27:30.144 subtype: nvme subsystem 00:27:30.144 treq: not specified, sq flow control disable supported 00:27:30.144 portid: 1 00:27:30.144 trsvcid: 4420 00:27:30.145 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:30.145 traddr: 10.0.0.1 00:27:30.145 eflags: none 00:27:30.145 sectype: none 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 
]] 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.145 nvme0n1 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.145 13:57:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.145 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: ]] 00:27:30.405 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.406 
13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.406 nvme0n1 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.406 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.666 13:57:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: ]] 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.666 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.667 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.667 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.667 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.667 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.667 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.667 13:57:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.667 13:57:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.667 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.667 13:57:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.667 nvme0n1 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
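Every digest/dhgroup/key combination in this stretch follows the same two-step pattern: nvmet_auth_set_key writes the expected secret, hash and DH group into the kernel target's host entry (again with the redirection targets hidden by set -x), and connect_authenticate then narrows the SPDK initiator to that digest and group and attaches using the matching keyring names. Condensed into one hedged sketch for the sha256/ffdhe2048/key1 iteration being traced here, with the dhchap_* attribute names assumed from the stock nvmet configfs layout and the long secret strings elided:

# One iteration of the digest/dhgroup/key sweep, condensed. The dhchap_* file
# names are assumed; the rpc_cmd calls and flags are taken from the surrounding trace.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha256)'   > "$host_dir/dhchap_hash"       # nvmet_auth_set_key: digest
echo ffdhe2048        > "$host_dir/dhchap_dhgroup"    # nvmet_auth_set_key: DH group
echo 'DHHC-1:00:...'  > "$host_dir/dhchap_key"        # keys[1] secret, elided here
echo 'DHHC-1:02:...'  > "$host_dir/dhchap_ctrl_key"   # ckeys[1] secret, elided here

# connect_authenticate: limit the initiator to the same digest/group, attach
# with the keyring entries registered earlier, check the name, then detach.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
rpc_cmd bdev_nvme_detach_controller nvme0

The surrounding loops (host/auth.sh@100-102) repeat this for each of sha256, sha384 and sha512, each ffdhe* group from 2048 through 8192, and every key index, detaching between attempts.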
00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: ]] 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.667 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.927 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.927 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.927 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.928 nvme0n1 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: ]] 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:30.928 13:57:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.928 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.192 nvme0n1 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.192 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.453 nvme0n1 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: ]] 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.453 13:57:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.715 nvme0n1 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: ]] 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.715 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.976 nvme0n1 00:27:31.976 
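The cycle traced above is what connect_authenticate drives for each digest/dhgroup/keyid combination. The harness's rpc_cmd wrapper forwards its arguments to SPDK's scripts/rpc.py, so a standalone sketch of the same sequence looks roughly like the following; the addresses, NQNs and flags are taken from the trace, while the key names key1/ckey1 are assumed to refer to key entries registered earlier in the test (not shown here).

    # Sketch of one connect_authenticate cycle (sha256 / ffdhe3072 / keyid=1),
    # issued directly through scripts/rpc.py rather than the rpc_cmd wrapper.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    ./scripts/rpc.py bdev_nvme_get_controllers          # the new controller should show up as nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0  # tear down before the next keyid/dhgroup
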
13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: ]] 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.976 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.236 nvme0n1 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.236 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
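The DHHC-1:NN:<base64>: strings echoed throughout are NVMe DH-HMAC-CHAP secrets in their textual form: the two-digit field identifies the secret transform (00 = unhashed, 01/02/03 = SHA-256/384/512), and, assuming the layout produced by nvme-cli's gen-dhchap-key, the base64 payload carries the key material followed by a 4-byte CRC-32 trailer, which is why the decoded sizes come out to 36, 52 and 68 bytes. A small, hypothetical inspection helper under those assumptions:

    # Hypothetical helper (not part of the test suite) for inspecting the DHHC-1
    # secrets seen in this trace. Assumes the textual format used by nvme-cli/libnvme:
    # DHHC-1:<transform>:<base64 of key material plus 4-byte CRC-32>:
    inspect_dhchap_key() {
        local secret=$1
        local xform=${secret#DHHC-1:}; xform=${xform%%:*}
        local b64=${secret#DHHC-1:??:}; b64=${b64%:}
        local bytes; bytes=$(printf '%s' "$b64" | base64 -d | wc -c)
        echo "transform id ${xform}, payload ${bytes} bytes, key $((bytes - 4)) bytes"
    }
    # e.g. the keyid=2 secret above: transform id 01, payload 36 bytes, key 32 bytes
    inspect_dhchap_key 'DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js:'
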
00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: ]] 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.237 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.497 nvme0n1 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.497 
13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.497 13:57:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.497 13:57:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.498 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.498 13:57:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.758 nvme0n1 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: ]] 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:32.758 13:57:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.758 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.329 nvme0n1 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: ]] 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.329 13:57:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.329 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.590 nvme0n1 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: ]] 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.590 13:57:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.590 13:57:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.851 nvme0n1 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: ]] 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.851 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.421 nvme0n1 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.421 13:58:00 
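The echo 'hmac(sha256)' / echo ffdhe4096 / echo DHHC-1:... entries at auth.sh@48-51 are the target-side half of each cycle: nvmet_auth_set_key evidently pushes the digest, DH group and secrets into the kernel nvmet host entry before the SPDK host reconnects. A rough expansion under that assumption; the configfs path and attribute names follow the standard Linux nvmet layout and are not themselves visible in the trace, only the echoed values are.

    # Plausible expansion of "nvmet_auth_set_key sha256 ffdhe4096 3", assuming the
    # usual /sys/kernel/config/nvmet layout; only the echoed values appear in the trace.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host_dir/dhchap_hash"       # digest negotiated for DH-HMAC-CHAP
    echo ffdhe4096      > "$host_dir/dhchap_dhgroup"    # DH group for this iteration
    echo 'DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==:' > "$host_dir/dhchap_key"
    echo 'DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7:' > "$host_dir/dhchap_ctrl_key"
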
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.421 13:58:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.681 nvme0n1 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.681 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:34.682 13:58:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: ]] 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.682 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.272 nvme0n1 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.272 
13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: ]] 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.272 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.273 13:58:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.273 13:58:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.844 nvme0n1 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: ]] 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.844 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.416 nvme0n1 00:27:36.416 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.416 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.416 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.416 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.416 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.416 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.416 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.416 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.416 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.416 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.416 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.416 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.416 
13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: ]] 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.417 13:58:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.677 nvme0n1 00:27:36.677 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.938 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.509 nvme0n1 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: ]] 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.509 13:58:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.080 nvme0n1 00:27:38.080 13:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.080 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.080 13:58:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.080 13:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.080 13:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.080 13:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: ]] 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.341 13:58:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.911 nvme0n1 00:27:38.911 13:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.911 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.911 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.911 13:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.911 13:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: ]] 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.171 13:58:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.742 nvme0n1 00:27:39.742 13:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.742 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.742 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.742 13:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.742 13:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.742 13:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.742 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.742 
13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.742 13:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.742 13:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.002 13:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.002 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.002 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: ]] 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
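Every sha256/sha384 block in this trace repeats the same two-step pattern per dhgroup and keyid: nvmet_auth_set_key provisions the DH-HMAC-CHAP key pair on the kernel nvmet target, then connect_authenticate exercises the SPDK host side against it. A condensed sketch of the host-side step, reconstructed only from the rpc_cmd invocations visible in the trace, is given here; rpc_cmd, the 10.0.0.1 initiator IP and the nvme0 bdev name come from the test harness, and the body below is an approximation rather than the verbatim host/auth.sh source.

# Sketch of connect_authenticate <digest> <dhgroup> <keyid>, assembled from the
# traced rpc_cmd calls; helper names and defaults are harness assumptions.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Restrict the host to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach to the target with the key pair configured for this keyid
    # (--dhchap-ctrlr-key is dropped when the keyid has no controller key, e.g. key 4).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # The controller only shows up if the DH-HMAC-CHAP handshake succeeded.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Detach so the next digest/dhgroup/keyid combination starts from a clean slate.
    rpc_cmd bdev_nvme_detach_controller nvme0
}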
00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.003 13:58:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.572 nvme0n1 00:27:40.572 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.572 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.572 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.572 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.572 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.572 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.832 
13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.832 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.403 nvme0n1 00:27:41.403 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.403 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.403 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.403 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.403 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.403 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.663 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.663 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.663 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.663 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.663 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.663 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:41.663 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.663 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.663 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:41.663 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.663 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.663 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.663 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:41.663 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: ]] 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.664 13:58:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.664 nvme0n1 00:27:41.664 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.664 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.664 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.664 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.664 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.664 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.664 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.664 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.664 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.664 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: ]] 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
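The target-side half of each iteration, nvmet_auth_set_key, is only partially visible in this trace: xtrace records the values being echoed (the 'hmac(sha256)'/'hmac(sha384)' digest string, the ffdhe* group, and the DHHC-1 key plus optional controller key) but not where they are written. A rough reconstruction is sketched below; the configfs location and attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are assumptions based on the Linux nvmet in-band authentication interface and do not appear in this log.

# Hypothetical sketch of nvmet_auth_set_key <digest> <dhgroup> <keyid>; only the
# echoed values are taken from the trace, the configfs paths are assumed. The
# keys/ckeys arrays are the ones iterated by the surrounding for-loops.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${host}/dhchap_hash"      # e.g. hmac(sha384)
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"   # e.g. ffdhe2048
    echo "${key}"          > "${host}/dhchap_key"       # DHHC-1:0x:...
    # A controller (bidirectional) key is only set when one exists for this keyid.
    [[ -n $ckey ]] && echo "$ckey" > "${host}/dhchap_ctrl_key"
}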
00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.962 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.963 nvme0n1 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: ]] 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.963 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.223 nvme0n1 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: ]] 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.223 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.224 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.224 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.224 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.224 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.224 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.224 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.484 nvme0n1 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.484 13:58:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.745 nvme0n1 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: ]] 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
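The records around this point show the test resolving which address to dial for the tcp transport: a small transport-to-variable map is built ([rdma]=NVMF_FIRST_TARGET_IP, [tcp]=NVMF_INITIATOR_IP), the tcp entry is selected, and its value (10.0.0.1 in this run) is echoed back to the caller. A minimal standalone sketch of that selection logic is below; it is a paraphrase of what the trace shows, not the nvmf/common.sh source, and the get_main_ns_ip_sketch name plus the exported NVMF_* variables are assumptions for illustration only.

# Sketch only: reproduces the address-selection pattern traced above.
get_main_ns_ip_sketch() {
    local transport=$1 ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    [[ -z $transport ]] && return 1              # trace: [[ -z tcp ]]
    [[ -z ${ip_candidates[$transport]} ]] && return 1   # trace: [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$transport]}              # trace: ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                  # trace: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                # trace: echo 10.0.0.1
}

# Example use: NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip_sketch tcp   -> prints 10.0.0.1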
00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.745 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.006 nvme0n1 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: ]] 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
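The records above complete one full pass for this digest/DH-group pair: the next key is pushed to the target via the nvmet_auth_set_key helper, bdev_nvme_set_options restricts the host to sha384/ffdhe3072, bdev_nvme_attach_controller connects with the matching --dhchap-key/--dhchap-ctrlr-key names, bdev_nvme_get_controllers confirms a controller called nvme0 exists (the pass condition), and the controller is detached before the next key id is tried. A condensed sketch of that driver loop follows, using only the RPC calls visible in this trace; the ./scripts/rpc.py invocation, the empty keys/ckeys arrays, and the inline NQNs are placeholders taken from this run, and the key$keyid/ckey$keyid names refer to keyring entries created earlier in the test, so this is an illustration rather than a general recipe.

# Sketch only: the per-key DH-HMAC-CHAP loop this trace is executing.
digest=sha384 dhgroup=ffdhe3072
keys=() ckeys=()   # numeric keyid -> DHHC-1 secret; fill from the values logged above

for keyid in "${!keys[@]}"; do
    # Target side: install key/ckey for this host (helper defined in host/auth.sh).
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

    # Host side: allow only the digest/dhgroup under test, then connect with DH-HMAC-CHAP.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

    # The attach only succeeds if authentication completed, so a controller
    # named nvme0 showing up is the check performed in the log above.
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
done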
00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.006 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.268 nvme0n1 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: ]] 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.268 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.269 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.269 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.269 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.269 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.269 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.269 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.529 nvme0n1 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: ]] 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.529 13:58:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.790 nvme0n1 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.790 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.051 nvme0n1 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.051 13:58:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: ]] 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.051 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.315 nvme0n1 00:27:44.315 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.315 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.315 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.315 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.315 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.315 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: ]] 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.578 13:58:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.839 nvme0n1 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.839 13:58:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: ]] 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.839 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.100 nvme0n1 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: ]] 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:45.100 13:58:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.100 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.360 nvme0n1 00:27:45.360 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.622 13:58:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.623 13:58:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.623 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:45.623 13:58:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.885 nvme0n1 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: ]] 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.885 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.472 nvme0n1 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: ]] 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.472 13:58:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.044 nvme0n1 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.044 13:58:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: ]] 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.044 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.616 nvme0n1 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: ]] 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.616 13:58:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.186 nvme0n1 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.186 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.187 13:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.758 nvme0n1 00:27:48.758 13:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.758 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.758 13:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.758 13:58:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.758 13:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.758 13:58:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: ]] 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
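Every nvme0n1 block in this part of the log is one iteration of the same authentication round; only the digest, DH group, and key index change. The following is a condensed sketch of that round, not the verbatim host/auth.sh: the RPC names, NQNs, address and port are taken directly from the trace, while the loop framing paraphrases the for-loops visible at host/auth.sh@100-102.

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # program key $keyid (and its ckey, if any) into the kernel nvmet target
          rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
              -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
              --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
          rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0, i.e. DH-HMAC-CHAP succeeded
          rpc_cmd bdev_nvme_detach_controller nvme0              # tear down before the next keyid/dhgroup
        done
      done
    done

Detaching between rounds keeps each DH-HMAC-CHAP negotiation independent, so a failure shows up as a missing nvme0 in the bdev_nvme_get_controllers output for that specific digest/dhgroup/key combination.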
00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.758 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.329 nvme0n1 00:27:49.329 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.329 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.329 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.329 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.329 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.329 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: ]] 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.590 13:58:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.165 nvme0n1 00:27:50.165 13:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.165 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.165 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.165 13:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.165 13:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.165 13:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.165 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.425 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.425 13:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.425 13:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.425 13:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.425 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.425 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:50.425 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: ]] 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.426 13:58:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.996 nvme0n1 00:27:50.996 13:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.996 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.996 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.996 13:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.996 13:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.996 13:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: ]] 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.255 13:58:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.823 nvme0n1 00:27:51.823 13:58:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.823 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:51.823 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.823 13:58:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.823 13:58:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.823 13:58:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.083 13:58:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.083 13:58:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.652 nvme0n1 00:27:52.652 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.652 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.652 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.652 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.652 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.652 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.912 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: ]] 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.913 nvme0n1 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.913 13:58:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.913 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.173 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.173 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.173 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:53.173 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.173 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.173 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.173 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.173 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:53.173 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:53.173 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.173 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.173 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: ]] 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.174 nvme0n1 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: ]] 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.174 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.434 nvme0n1 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.434 13:58:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: ]] 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.434 13:58:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.434 13:58:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.694 nvme0n1 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.694 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.695 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.955 nvme0n1 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: ]] 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.955 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.215 nvme0n1 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.215 
13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: ]] 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.215 13:58:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:54.215 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.216 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.476 nvme0n1 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: ]] 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.476 13:58:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.736 nvme0n1 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.736 13:58:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: ]] 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:54.736 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
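The entries above repeat the initiator-side half of one authentication round. Condensed into plain commands, and assuming rpc_cmd is the test harness's wrapper around SPDK's scripts/rpc.py while the address, NQNs and key names are simply the values visible in this part of the trace, a single round looks roughly like the sketch below (an illustration of the pattern, not a verbatim excerpt of host/auth.sh):

  # Limit the host to the digest/DH group under test (values taken from the trace above).
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # Attach with DH-HMAC-CHAP host and controller keys; key3/ckey3 were registered earlier in the test.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key key3 --dhchap-ctrlr-key ckey3
  # Authentication succeeded if the controller shows up; tear it down before the next round.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
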
00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.737 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.997 nvme0n1 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.997 
13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.997 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.279 nvme0n1 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: ]] 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.279 13:58:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.539 nvme0n1 00:27:55.539 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.539 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.539 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.539 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.539 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.539 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: ]] 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.800 13:58:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.800 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.060 nvme0n1 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
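The host/auth.sh@101 to @104 markers in the trace outline the loop that drives these rounds. Reconstructed from those markers alone (the dhgroups, keys and ckeys arrays are populated earlier in the script and are not visible here, and only the sha512 digest appears in this portion of the log), the shape is approximately:

  for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
      for keyid in "${!keys[@]}"; do       # key indexes 0..4 in this run
          # Program the target with the digest/group/key combination under test...
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
          # ...then connect from the host with the matching key pair and verify.
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done
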
00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: ]] 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.060 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.357 nvme0n1 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: ]] 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.357 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.358 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.358 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:56.358 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.358 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.358 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.358 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.358 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.358 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.358 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.358 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.358 13:58:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.358 13:58:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.358 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.358 13:58:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.617 nvme0n1 00:27:56.618 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.618 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.618 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.618 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.618 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.618 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.879 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.140 nvme0n1 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: ]] 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.140 13:58:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.710 nvme0n1 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: ]] 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
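Note: the block above is one pass of the test's core loop. For each (digest, dhgroup, keyid) combination the target-side key is provisioned with nvmet_auth_set_key, the host driver is restricted to that digest and DH group, the controller is attached with the matching DH-CHAP key, its presence is verified, and it is detached before the next combination. A minimal host-side sketch of that sequence, assuming scripts/rpc.py as the JSON-RPC client (the rpc_cmd helper in the trace wraps it) and assuming the named keys (key0/ckey0) were registered earlier in the run, outside this excerpt:

#!/usr/bin/env bash
# Sketch only: replays the host half of one connect_authenticate iteration.
rpc=scripts/rpc.py                       # assumption: stand-in for the harness rpc_cmd helper
digest=sha512 dhgroup=ffdhe6144 keyid=0  # the combination exercised in the trace above

# Restrict the host driver to the digest/DH group under test.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach to the authenticating subsystem with the pre-registered key names.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# The iteration passes when the controller shows up; detach before the next combination.
"$rpc" bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
"$rpc" bdev_nvme_detach_controller nvme0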
00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:57.710 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.711 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.282 nvme0n1 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: ]] 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.282 13:58:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.906 nvme0n1 00:27:58.906 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.906 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.906 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.906 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.906 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: ]] 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.907 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.167 nvme0n1 00:27:59.167 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.167 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.167 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.168 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.428 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.428 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.428 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.428 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.428 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.428 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.428 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.428 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.428 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.428 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.428 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.428 13:58:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.428 13:58:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.428 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.428 13:58:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.688 nvme0n1 00:27:59.688 13:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.688 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.688 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.688 13:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.688 13:58:26 
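Note the keyid=4 case above: its controller-key slot is empty (ckey= followed by [[ -z '' ]]), so the attach is issued with --dhchap-key key4 only. The script handles this with bash's ${var:+word} conditional expansion in the ckey=() assignment, which emits the --dhchap-ctrlr-key option only when a controller key exists. A tiny standalone illustration of the idiom, using hypothetical key names:

#!/usr/bin/env bash
# ${ckeys[keyid]:+...} expands to the extra option words only when ckeys[keyid] is non-empty.
ckeys=(ckey0 ckey1 ckey2 ckey3 "")   # hypothetical: index 4 has no controller key, as in the trace
for keyid in 1 4; do
    extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=${keyid}: ${extra[*]:-<no controller key argument>}"
done
# keyid=1: --dhchap-ctrlr-key ckey1
# keyid=4: <no controller key argument>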
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.688 13:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3YWQ0OWIyZTMwMjNiMDBlOTU2ODY4NzA3ZWQ5YjChhZuQ: 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: ]] 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGE1ZjZjMzViNmI0YmMyMzkzNWEzODg1Y2ViNTcwYWNjMzg3NTgyNTA4YWZlMGJlNGE5NWE3NGYzNjQ5NjIxY+ugaog=: 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.948 13:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.949 13:58:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.949 13:58:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.949 13:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.949 13:58:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.520 nvme0n1 00:28:00.520 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.520 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.520 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.520 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.520 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.520 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: ]] 00:28:00.781 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.782 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.353 nvme0n1 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.353 13:58:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWMxN2M3YmY5ZDQ2ZDA1MWFmOWE2ZWE0ZDhkNzQ4N2IDb6js: 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: ]] 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTc3ZDExMTE5OTYzNDQyMTE1ZGQyYmJiY2U5NjkyYzGbxIhm: 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:01.353 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:01.354 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.615 13:58:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.186 nvme0n1 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTI1N2IyNmQ3NzBlNWI2MmQ2NjE2N2IzNTBiMDlkN2Y0OTc5MTgxMTA5YWRiMDhipqQmFw==: 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: ]] 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZlYWQyMjI2OWQzOGI5ZWZhMTNjYTlhYjE5OTA0OTeDP+L7: 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:02.186 13:58:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.186 13:58:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.446 13:58:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:02.446 13:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.446 13:58:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.017 nvme0n1 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTVjYWI0ZDIzNjU4ZmFjMTgyM2Q2YWZhNzkyNDU3YjRlZTljZTE2ZGY3NjA5ZjI1NTgzNTdiYjlkODljYWU0NrAoCts=: 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.017 13:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.277 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.277 13:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.277 13:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.277 13:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.277 13:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.277 13:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.278 13:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.278 13:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.278 13:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.278 13:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.278 13:58:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.278 13:58:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.278 13:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:03.278 13:58:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.850 nvme0n1 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWYwZTQwMWIyOGY4MDJjOTEzNTAxZDQ0MTk0N2RmMGY1NDc2ZGRjOTdkMTk2ZTc1lYvw9g==: 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: ]] 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWU2ZTc2MTYwMGE3OWVhMGNkNDNkMmYzYjY2MTc1NjMwMTliMjc4ODhhYWE3ZGNm2OKvvg==: 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.850 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.111 
13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.111 request: 00:28:04.111 { 00:28:04.111 "name": "nvme0", 00:28:04.111 "trtype": "tcp", 00:28:04.111 "traddr": "10.0.0.1", 00:28:04.111 "adrfam": "ipv4", 00:28:04.111 "trsvcid": "4420", 00:28:04.111 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:04.111 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:04.111 "prchk_reftag": false, 00:28:04.111 "prchk_guard": false, 00:28:04.111 "hdgst": false, 00:28:04.111 "ddgst": false, 00:28:04.111 "method": "bdev_nvme_attach_controller", 00:28:04.111 "req_id": 1 00:28:04.111 } 00:28:04.111 Got JSON-RPC error response 00:28:04.111 response: 00:28:04.111 { 00:28:04.111 "code": -5, 00:28:04.111 "message": "Input/output error" 00:28:04.111 } 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.111 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.112 request: 00:28:04.112 { 00:28:04.112 "name": "nvme0", 00:28:04.112 "trtype": "tcp", 00:28:04.112 "traddr": "10.0.0.1", 00:28:04.112 "adrfam": "ipv4", 00:28:04.112 "trsvcid": "4420", 00:28:04.112 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:04.112 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:04.112 "prchk_reftag": false, 00:28:04.112 "prchk_guard": false, 00:28:04.112 "hdgst": false, 00:28:04.112 "ddgst": false, 00:28:04.112 "dhchap_key": "key2", 00:28:04.112 "method": "bdev_nvme_attach_controller", 00:28:04.112 "req_id": 1 00:28:04.112 } 00:28:04.112 Got JSON-RPC error response 00:28:04.112 response: 00:28:04.112 { 00:28:04.112 "code": -5, 00:28:04.112 "message": "Input/output error" 00:28:04.112 } 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:04.112 13:58:30 
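From this point the script switches from the happy-path matrix to negative checks: bdev_nvme_attach_controller is wrapped in the harness's NOT helper and issued first without any DH-CHAP key and then with a key the target was not configured for (key2, while the target was just keyed for keyid 1), so authentication fails, the RPC surfaces the JSON-RPC error -5 "Input/output error" shown in the request/response dumps, and jq length over bdev_nvme_get_controllers confirms nothing stayed attached. A sketch of that assertion pattern, assuming scripts/rpc.py as the client and an inline stand-in for the harness NOT helper:

#!/usr/bin/env bash
# Inline stand-in for the harness NOT helper: succeed only if the wrapped command fails.
not() { if "$@"; then return 1; else return 0; fi; }
rpc=scripts/rpc.py   # assumption: stand-in for rpc_cmd

# Attaching without a key, or with the wrong key, must be rejected by the target.
not "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
not "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2

# No controller may be left behind by the failed attempts.
[[ $("$rpc" bdev_nvme_get_controllers | jq length) -eq 0 ]]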
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:04.112 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.373 request: 00:28:04.373 { 00:28:04.373 "name": "nvme0", 00:28:04.373 "trtype": "tcp", 00:28:04.373 "traddr": "10.0.0.1", 00:28:04.373 "adrfam": "ipv4", 
00:28:04.373 "trsvcid": "4420", 00:28:04.373 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:04.373 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:04.373 "prchk_reftag": false, 00:28:04.373 "prchk_guard": false, 00:28:04.373 "hdgst": false, 00:28:04.373 "ddgst": false, 00:28:04.373 "dhchap_key": "key1", 00:28:04.373 "dhchap_ctrlr_key": "ckey2", 00:28:04.373 "method": "bdev_nvme_attach_controller", 00:28:04.373 "req_id": 1 00:28:04.373 } 00:28:04.373 Got JSON-RPC error response 00:28:04.373 response: 00:28:04.373 { 00:28:04.373 "code": -5, 00:28:04.373 "message": "Input/output error" 00:28:04.373 } 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:04.373 rmmod nvme_tcp 00:28:04.373 rmmod nvme_fabrics 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1243109 ']' 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1243109 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1243109 ']' 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1243109 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1243109 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1243109' 00:28:04.373 killing process with pid 1243109 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1243109 00:28:04.373 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1243109 00:28:04.634 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:28:04.634 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:04.634 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:04.634 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:04.634 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:04.634 13:58:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.634 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:04.634 13:58:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.548 13:58:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:06.548 13:58:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:06.548 13:58:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:06.548 13:58:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:06.548 13:58:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:06.548 13:58:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:06.548 13:58:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:06.548 13:58:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:06.548 13:58:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:06.548 13:58:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:06.548 13:58:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:06.548 13:58:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:06.548 13:58:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:10.757 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:10.757 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:10.757 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:10.757 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:10.757 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:10.757 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:10.757 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:10.757 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:10.757 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:10.757 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:10.757 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:10.757 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:10.757 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:10.757 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:10.757 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:10.757 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:10.757 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:10.757 13:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.szI /tmp/spdk.key-null.pNX /tmp/spdk.key-sha256.t9P /tmp/spdk.key-sha384.lCn /tmp/spdk.key-sha512.mST 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:10.757 13:58:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:14.063 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:14.063 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:14.063 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:14.064 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:14.064 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:14.064 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:14.064 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:14.064 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:14.064 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:14.064 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:14.064 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:14.064 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:14.064 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:14.064 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:14.064 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:14.064 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:14.064 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:14.325 00:28:14.325 real 0m58.701s 00:28:14.325 user 0m52.570s 00:28:14.325 sys 0m15.050s 00:28:14.325 13:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:14.325 13:58:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.325 ************************************ 00:28:14.325 END TEST nvmf_auth_host 00:28:14.325 ************************************ 00:28:14.325 13:58:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:14.325 13:58:40 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:14.325 13:58:40 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:14.325 13:58:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:14.325 13:58:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.325 13:58:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.325 ************************************ 00:28:14.325 START TEST nvmf_digest 00:28:14.325 ************************************ 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:14.325 * Looking for test storage... 
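The nvmf_auth_host negative-path checks that finish above all funnel through the same attach RPC; a minimal sketch of those calls follows, using only the NQNs, address, and key names visible in this run. Each call is expected to fail with the JSON-RPC code -5 (Input/output error) responses shown in the log, and the exact failure reason is not asserted here.

# Hedged recap of the NOT rpc_cmd attach attempts exercised above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # local shorthand

# No DH-HMAC-CHAP key at all
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

# Single key only (key2)
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2

# Host key plus controller key (key1 + ckey2)
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey2

# After each failed attempt the test confirms no controller was left behind:
$RPC bdev_nvme_get_controllers | jq length    # the log shows this printing 0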
00:28:14.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:14.325 13:58:40 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:14.325 13:58:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:22.556 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.556 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:22.556 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:22.556 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:22.556 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:22.556 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:22.556 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:22.556 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:22.556 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:22.556 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:22.556 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:22.556 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:22.557 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:22.557 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:22.557 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:22.557 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:28:22.557 00:28:22.557 --- 10.0.0.2 ping statistics --- 00:28:22.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.557 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.442 ms 00:28:22.557 00:28:22.557 --- 10.0.0.1 ping statistics --- 00:28:22.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.557 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:22.557 13:58:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:22.557 ************************************ 00:28:22.557 START TEST nvmf_digest_clean 00:28:22.557 ************************************ 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1259758 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1259758 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1259758 ']' 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.557 
13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:22.557 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.557 [2024-07-15 13:58:48.073344] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:22.557 [2024-07-15 13:58:48.073402] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.558 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.558 [2024-07-15 13:58:48.144936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.558 [2024-07-15 13:58:48.219017] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.558 [2024-07-15 13:58:48.219054] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.558 [2024-07-15 13:58:48.219062] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.558 [2024-07-15 13:58:48.219068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.558 [2024-07-15 13:58:48.219074] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
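The nvmf_tgt instance that just started runs inside the cvl_0_0_ns_spdk namespace built by nvmftestinit above; a minimal sketch of that plumbing, using only the interface names, addresses, and command lines that appear in this run:

# Hedged sketch of nvmf_tcp_init as traced above: cvl_0_0 (target side) moves
# into the namespace with 10.0.0.2, cvl_0_1 (initiator side) keeps 10.0.0.1.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # sanity pings, as in the log
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# nvmfappstart then launches the target inside that namespace, paused for RPC:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc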
00:28:22.558 [2024-07-15 13:58:48.219098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.558 null0 00:28:22.558 [2024-07-15 13:58:48.957916] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.558 [2024-07-15 13:58:48.982091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1259955 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1259955 /var/tmp/bperf.sock 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1259955 ']' 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:22.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:22.558 13:58:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.558 [2024-07-15 13:58:49.034963] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:22.558 [2024-07-15 13:58:49.035010] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259955 ] 00:28:22.558 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.818 [2024-07-15 13:58:49.108912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.818 [2024-07-15 13:58:49.172817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.389 13:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:23.389 13:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:23.389 13:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:23.389 13:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:23.389 13:58:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:23.650 13:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.650 13:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.920 nvme0n1 00:28:23.920 13:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:23.920 13:58:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:23.920 Running I/O for 2 seconds... 
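The randread/4096/qd128 pass whose results follow is driven entirely over the bperf socket from the root namespace, talking to the listener on 10.0.0.2:4420 inside the namespace. A minimal sketch of that flow, with the command lines taken from the log:

# Hedged sketch of run_bperf for the first nvmf_digest_clean case.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # local shorthand

# 1. Start bdevperf paused (--wait-for-rpc) on its own RPC socket.
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

# 2. Finish framework init, then attach the subsystem with data digest enabled.
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 3. Kick off the timed I/O pass (2 seconds, per -t 2 above).
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests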
00:28:25.829 00:28:25.829 Latency(us) 00:28:25.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.829 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:25.829 nvme0n1 : 2.00 20382.44 79.62 0.00 0.00 6271.45 2880.85 12178.77 00:28:25.829 =================================================================================================================== 00:28:25.829 Total : 20382.44 79.62 0.00 0.00 6271.45 2880.85 12178.77 00:28:25.829 0 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:26.090 | select(.opcode=="crc32c") 00:28:26.090 | "\(.module_name) \(.executed)"' 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1259955 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1259955 ']' 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1259955 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1259955 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1259955' 00:28:26.090 killing process with pid 1259955 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1259955 00:28:26.090 Received shutdown signal, test time was about 2.000000 seconds 00:28:26.090 00:28:26.090 Latency(us) 00:28:26.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.090 =================================================================================================================== 00:28:26.090 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:26.090 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1259955 00:28:26.351 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:26.351 13:58:52 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:26.351 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:26.351 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:26.351 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:26.351 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:26.351 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:26.351 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1260638 00:28:26.351 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1260638 /var/tmp/bperf.sock 00:28:26.351 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1260638 ']' 00:28:26.351 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:26.351 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:26.351 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:26.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:26.351 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:26.351 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:26.351 13:58:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:26.351 [2024-07-15 13:58:52.736842] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:26.351 [2024-07-15 13:58:52.736897] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260638 ] 00:28:26.351 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:26.351 Zero copy mechanism will not be used. 
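Each bperf pass in this test is followed by the same crc32c accounting check seen above: the bdevperf app is asked which accel module executed the digest work and how many operations it ran. A sketch of that step, with the jq filter copied from the log:

# Hedged sketch of get_accel_stats as used after every run; with scan_dsa=false
# the expected module is "software" and the executed count must be greater than 0.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    accel_get_stats | jq -rc '.operations[]
        | select(.opcode=="crc32c")
        | "\(.module_name) \(.executed)"'

The test then compares the reported module name against the expected software path before killing the bperf process, as the killprocess lines above show.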
00:28:26.351 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.351 [2024-07-15 13:58:52.807999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.351 [2024-07-15 13:58:52.861201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.292 13:58:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:27.292 13:58:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:27.292 13:58:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:27.292 13:58:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:27.292 13:58:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:27.292 13:58:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.292 13:58:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.552 nvme0n1 00:28:27.552 13:58:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:27.552 13:58:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:27.552 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:27.552 Zero copy mechanism will not be used. 00:28:27.552 Running I/O for 2 seconds... 
00:28:30.096 00:28:30.096 Latency(us) 00:28:30.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.096 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:30.096 nvme0n1 : 2.01 2305.68 288.21 0.00 0.00 6935.90 2293.76 10758.83 00:28:30.096 =================================================================================================================== 00:28:30.096 Total : 2305.68 288.21 0.00 0.00 6935.90 2293.76 10758.83 00:28:30.096 0 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:30.096 | select(.opcode=="crc32c") 00:28:30.096 | "\(.module_name) \(.executed)"' 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1260638 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1260638 ']' 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1260638 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1260638 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1260638' 00:28:30.096 killing process with pid 1260638 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1260638 00:28:30.096 Received shutdown signal, test time was about 2.000000 seconds 00:28:30.096 00:28:30.096 Latency(us) 00:28:30.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.096 =================================================================================================================== 00:28:30.096 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1260638 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:30.096 13:58:56 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1261327 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1261327 /var/tmp/bperf.sock 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1261327 ']' 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:30.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:30.096 13:58:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:30.096 [2024-07-15 13:58:56.435398] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:28:30.096 [2024-07-15 13:58:56.435455] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261327 ] 00:28:30.096 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.096 [2024-07-15 13:58:56.511262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.096 [2024-07-15 13:58:56.564706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.037 13:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:31.037 13:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:31.037 13:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:31.037 13:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:31.037 13:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:31.037 13:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:31.037 13:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:31.297 nvme0n1 00:28:31.297 13:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:31.297 13:58:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:31.297 Running I/O for 2 seconds... 
00:28:33.842
00:28:33.842 Latency(us)
00:28:33.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:33.842 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:33.842 nvme0n1 : 2.01 21208.95 82.85 0.00 0.00 6027.09 3822.93 12178.77
00:28:33.842 ===================================================================================================================
00:28:33.842 Total : 21208.95 82.85 0.00 0.00 6027.09 3822.93 12178.77
00:28:33.842 0
00:28:33.842 13:58:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:33.842 13:58:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:33.842 13:58:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:33.842 13:58:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:33.842 | select(.opcode=="crc32c")
00:28:33.842 | "\(.module_name) \(.executed)"'
00:28:33.842 13:58:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:33.842 13:58:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:33.842 13:58:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:33.842 13:58:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:33.842 13:58:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:33.842 13:58:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1261327
00:28:33.842 13:58:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1261327 ']'
00:28:33.842 13:58:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1261327
00:28:33.842 13:58:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:28:33.842 13:58:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:33.842 13:58:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1261327
00:28:33.842 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:33.842 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:33.842 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1261327'
00:28:33.842 killing process with pid 1261327
00:28:33.842 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1261327
00:28:33.842 Received shutdown signal, test time was about 2.000000 seconds
00:28:33.842
00:28:33.842 Latency(us)
00:28:33.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:33.842 ===================================================================================================================
00:28:33.842 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:33.842 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1261327
00:28:33.842 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:28:33.843 13:59:00
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:33.842 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:33.842 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:33.842 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:33.842 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:33.842 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:33.843 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1262115 00:28:33.843 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1262115 /var/tmp/bperf.sock 00:28:33.843 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1262115 ']' 00:28:33.843 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:33.843 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:33.843 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:33.843 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:33.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:33.843 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:33.843 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:33.843 [2024-07-15 13:59:00.183552] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:33.843 [2024-07-15 13:59:00.183606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262115 ] 00:28:33.843 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:33.843 Zero copy mechanism will not be used. 
00:28:33.843 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.843 [2024-07-15 13:59:00.258707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.843 [2024-07-15 13:59:00.312129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.784 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:34.784 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:34.784 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:34.784 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:34.784 13:59:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:34.784 13:59:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.784 13:59:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:35.044 nvme0n1 00:28:35.044 13:59:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:35.044 13:59:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:35.304 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:35.304 Zero copy mechanism will not be used. 00:28:35.304 Running I/O for 2 seconds... 
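After each run, the verification step reads the accel statistics from the bperf app to confirm which module actually executed the crc32c digest work. A condensed sketch of that check, reusing the jq filter that appears verbatim in the trace (the software module is expected here because scan_dsa=false):

  # Pull the crc32c stats and expect the software module to have executed at least once
  read -r acc_module acc_executed < <(
      scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  (( acc_executed > 0 )) && [[ $acc_module == software ]]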
00:28:37.216
00:28:37.216 Latency(us)
00:28:37.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:37.216 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:37.216 nvme0n1 : 2.00 3297.08 412.14 0.00 0.00 4845.47 3085.65 21845.33
00:28:37.216 ===================================================================================================================
00:28:37.216 Total : 3297.08 412.14 0.00 0.00 4845.47 3085.65 21845.33
00:28:37.216 0
00:28:37.216 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:37.216 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:37.216 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:37.216 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:37.216 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:37.216 | select(.opcode=="crc32c")
00:28:37.216 | "\(.module_name) \(.executed)"'
00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1262115
00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1262115 ']'
00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1262115
00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1262115
00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1262115'
00:28:37.477 killing process with pid 1262115
00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1262115
00:28:37.477 Received shutdown signal, test time was about 2.000000 seconds
00:28:37.477
00:28:37.477 Latency(us)
00:28:37.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:37.477 ===================================================================================================================
00:28:37.477 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1262115
00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1259758
00:28:37.477 13:59:03
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1259758 ']' 00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1259758 00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1259758 00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1259758' 00:28:37.477 killing process with pid 1259758 00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1259758 00:28:37.477 13:59:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1259758 00:28:37.738 00:28:37.738 real 0m16.121s 00:28:37.738 user 0m31.690s 00:28:37.738 sys 0m3.196s 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:37.738 ************************************ 00:28:37.738 END TEST nvmf_digest_clean 00:28:37.738 ************************************ 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:37.738 ************************************ 00:28:37.738 START TEST nvmf_digest_error 00:28:37.738 ************************************ 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1263034 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1263034 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1263034 ']' 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:37.738 13:59:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:38.035 [2024-07-15 13:59:04.271082] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:38.035 [2024-07-15 13:59:04.271139] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.035 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.035 [2024-07-15 13:59:04.337887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.035 [2024-07-15 13:59:04.408134] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.035 [2024-07-15 13:59:04.408172] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:38.035 [2024-07-15 13:59:04.408180] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.035 [2024-07-15 13:59:04.408186] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.035 [2024-07-15 13:59:04.408192] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:38.035 [2024-07-15 13:59:04.408217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.606 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:38.606 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:38.606 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:38.606 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:38.606 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:38.607 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.607 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:38.607 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.607 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:38.607 [2024-07-15 13:59:05.070134] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:38.607 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.607 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:38.607 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:38.607 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.607 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:38.867 null0 00:28:38.867 [2024-07-15 13:59:05.150898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.867 [2024-07-15 13:59:05.175072] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.867 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.867 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:38.867 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:38.867 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:38.867 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:38.867 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:38.867 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1263076 00:28:38.867 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1263076 /var/tmp/bperf.sock 00:28:38.867 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1263076 ']' 00:28:38.867 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:38.867 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:38.867 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:28:38.867 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:38.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:38.867 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:38.867 13:59:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:38.867 [2024-07-15 13:59:05.230935] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:38.867 [2024-07-15 13:59:05.230982] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263076 ] 00:28:38.867 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.867 [2024-07-15 13:59:05.306258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.867 [2024-07-15 13:59:05.360031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.808 13:59:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:39.808 13:59:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:39.808 13:59:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:39.808 13:59:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:39.808 13:59:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:39.808 13:59:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.808 13:59:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.808 13:59:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.808 13:59:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:39.808 13:59:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.068 nvme0n1 00:28:40.068 13:59:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:40.068 13:59:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.068 13:59:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.068 13:59:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.068 13:59:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:40.068 13:59:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:40.328 Running I/O for 2 seconds... 00:28:40.328 [2024-07-15 13:59:06.675824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.328 [2024-07-15 13:59:06.675855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.329 [2024-07-15 13:59:06.675864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.329 [2024-07-15 13:59:06.688126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.329 [2024-07-15 13:59:06.688145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.329 [2024-07-15 13:59:06.688153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.329 [2024-07-15 13:59:06.700728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.329 [2024-07-15 13:59:06.700747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.329 [2024-07-15 13:59:06.700753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.329 [2024-07-15 13:59:06.713013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.329 [2024-07-15 13:59:06.713031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.329 [2024-07-15 13:59:06.713042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.329 [2024-07-15 13:59:06.725683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.329 [2024-07-15 13:59:06.725701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.329 [2024-07-15 13:59:06.725708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.329 [2024-07-15 13:59:06.737618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.329 [2024-07-15 13:59:06.737634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.329 [2024-07-15 13:59:06.737641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.329 [2024-07-15 13:59:06.749981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.329 [2024-07-15 13:59:06.749997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5954 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.329 [2024-07-15 13:59:06.750004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.329 [2024-07-15 13:59:06.760755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.329 [2024-07-15 13:59:06.760772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.329 [2024-07-15 13:59:06.760778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.329 [2024-07-15 13:59:06.774368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.329 [2024-07-15 13:59:06.774385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.329 [2024-07-15 13:59:06.774391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.329 [2024-07-15 13:59:06.787291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.329 [2024-07-15 13:59:06.787308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.329 [2024-07-15 13:59:06.787314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.329 [2024-07-15 13:59:06.799907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.329 [2024-07-15 13:59:06.799924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.329 [2024-07-15 13:59:06.799930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.329 [2024-07-15 13:59:06.811485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.329 [2024-07-15 13:59:06.811501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.329 [2024-07-15 13:59:06.811507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.329 [2024-07-15 13:59:06.824507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.329 [2024-07-15 13:59:06.824527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.329 [2024-07-15 13:59:06.824533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.329 [2024-07-15 13:59:06.836559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.329 [2024-07-15 13:59:06.836576] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.329 [2024-07-15 13:59:06.836582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.329 [2024-07-15 13:59:06.848794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.329 [2024-07-15 13:59:06.848812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.329 [2024-07-15 13:59:06.848819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.590 [2024-07-15 13:59:06.860855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.590 [2024-07-15 13:59:06.860872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.590 [2024-07-15 13:59:06.860878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.590 [2024-07-15 13:59:06.871636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.590 [2024-07-15 13:59:06.871653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.590 [2024-07-15 13:59:06.871659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.590 [2024-07-15 13:59:06.884243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.590 [2024-07-15 13:59:06.884260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.590 [2024-07-15 13:59:06.884267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.590 [2024-07-15 13:59:06.896058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.590 [2024-07-15 13:59:06.896074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.590 [2024-07-15 13:59:06.896080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.590 [2024-07-15 13:59:06.908987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.590 [2024-07-15 13:59:06.909005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.590 [2024-07-15 13:59:06.909011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.590 [2024-07-15 13:59:06.921111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.590 [2024-07-15 13:59:06.921130] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.590 [2024-07-15 13:59:06.921136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.590 [2024-07-15 13:59:06.933209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.590 [2024-07-15 13:59:06.933225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.590 [2024-07-15 13:59:06.933232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.590 [2024-07-15 13:59:06.945328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.590 [2024-07-15 13:59:06.945345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.590 [2024-07-15 13:59:06.945351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.590 [2024-07-15 13:59:06.957445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.590 [2024-07-15 13:59:06.957462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.590 [2024-07-15 13:59:06.957468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.590 [2024-07-15 13:59:06.969748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.590 [2024-07-15 13:59:06.969765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.590 [2024-07-15 13:59:06.969771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.590 [2024-07-15 13:59:06.981798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.590 [2024-07-15 13:59:06.981816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.590 [2024-07-15 13:59:06.981822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.590 [2024-07-15 13:59:06.994333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.590 [2024-07-15 13:59:06.994350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.590 [2024-07-15 13:59:06.994356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.591 [2024-07-15 13:59:07.008118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16c58e0) 00:28:40.591 [2024-07-15 13:59:07.008138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.591 [2024-07-15 13:59:07.008144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.591 [2024-07-15 13:59:07.020226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.591 [2024-07-15 13:59:07.020243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.591 [2024-07-15 13:59:07.020250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.591 [2024-07-15 13:59:07.032538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.591 [2024-07-15 13:59:07.032554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.591 [2024-07-15 13:59:07.032563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.591 [2024-07-15 13:59:07.044251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.591 [2024-07-15 13:59:07.044269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.591 [2024-07-15 13:59:07.044276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.591 [2024-07-15 13:59:07.056124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.591 [2024-07-15 13:59:07.056141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.591 [2024-07-15 13:59:07.056148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.591 [2024-07-15 13:59:07.068386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.591 [2024-07-15 13:59:07.068403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.591 [2024-07-15 13:59:07.068409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.591 [2024-07-15 13:59:07.080375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.591 [2024-07-15 13:59:07.080392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.591 [2024-07-15 13:59:07.080398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.591 [2024-07-15 13:59:07.092768] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.591 [2024-07-15 13:59:07.092785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.591 [2024-07-15 13:59:07.092792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.591 [2024-07-15 13:59:07.105306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.591 [2024-07-15 13:59:07.105323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.591 [2024-07-15 13:59:07.105330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.851 [2024-07-15 13:59:07.117251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.851 [2024-07-15 13:59:07.117269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.851 [2024-07-15 13:59:07.117275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.851 [2024-07-15 13:59:07.129869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.851 [2024-07-15 13:59:07.129887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.851 [2024-07-15 13:59:07.129894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.851 [2024-07-15 13:59:07.141928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.851 [2024-07-15 13:59:07.141949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.851 [2024-07-15 13:59:07.141955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.851 [2024-07-15 13:59:07.154341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.851 [2024-07-15 13:59:07.154358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.851 [2024-07-15 13:59:07.154364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.851 [2024-07-15 13:59:07.166961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.851 [2024-07-15 13:59:07.166978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.851 [2024-07-15 13:59:07.166984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
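The run of data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions in this stretch is the intended effect of the crc32c error injection configured earlier in the trace, not a test failure: with the target's crc32c routed through the accel error module in corrupt mode, the data digests it produces stop matching what the initiator computes on receive, and each affected read completes as a retriable transport error (dnr:0) that bdevperf retries thanks to the --bdev-retry-count -1 option set above. A minimal sketch of that injection setup, using the same RPCs seen in the trace and assuming the default /var/tmp/spdk.sock target socket:

  # While nvmf_tgt is still paused by --wait-for-rpc, route crc32c through the accel 'error' module
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  # Keep injection disabled while the controller attaches, then corrupt crc32c results
  # with the same '-t corrupt -i 256' arguments the test passes
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256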
00:28:40.851 [2024-07-15 13:59:07.179202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.851 [2024-07-15 13:59:07.179219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.851 [2024-07-15 13:59:07.179225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.851 [2024-07-15 13:59:07.191384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.852 [2024-07-15 13:59:07.191401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.852 [2024-07-15 13:59:07.191408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.852 [2024-07-15 13:59:07.203610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.852 [2024-07-15 13:59:07.203627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.852 [2024-07-15 13:59:07.203633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.852 [2024-07-15 13:59:07.214882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.852 [2024-07-15 13:59:07.214899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.852 [2024-07-15 13:59:07.214905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.852 [2024-07-15 13:59:07.228202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.852 [2024-07-15 13:59:07.228219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.852 [2024-07-15 13:59:07.228225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.852 [2024-07-15 13:59:07.240393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.852 [2024-07-15 13:59:07.240411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.852 [2024-07-15 13:59:07.240417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.852 [2024-07-15 13:59:07.252485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.852 [2024-07-15 13:59:07.252502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.852 [2024-07-15 13:59:07.252508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.852 [2024-07-15 13:59:07.264199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.852 [2024-07-15 13:59:07.264216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.852 [2024-07-15 13:59:07.264222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.852 [2024-07-15 13:59:07.276860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.852 [2024-07-15 13:59:07.276877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.852 [2024-07-15 13:59:07.276883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.852 [2024-07-15 13:59:07.289752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.852 [2024-07-15 13:59:07.289768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.852 [2024-07-15 13:59:07.289775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.852 [2024-07-15 13:59:07.301684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.852 [2024-07-15 13:59:07.301700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.852 [2024-07-15 13:59:07.301706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.852 [2024-07-15 13:59:07.314177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.852 [2024-07-15 13:59:07.314193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.852 [2024-07-15 13:59:07.314199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.852 [2024-07-15 13:59:07.326423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.852 [2024-07-15 13:59:07.326439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.852 [2024-07-15 13:59:07.326445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.852 [2024-07-15 13:59:07.338727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.852 [2024-07-15 13:59:07.338743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.852 [2024-07-15 13:59:07.338749] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.852 [2024-07-15 13:59:07.350798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.852 [2024-07-15 13:59:07.350816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.852 [2024-07-15 13:59:07.350825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.852 [2024-07-15 13:59:07.362493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.852 [2024-07-15 13:59:07.362510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.852 [2024-07-15 13:59:07.362516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.852 [2024-07-15 13:59:07.375105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:40.852 [2024-07-15 13:59:07.375125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.852 [2024-07-15 13:59:07.375131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.388295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.388312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.388318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.400323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.400340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.400346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.412559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.412575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.412582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.424531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.424548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 
13:59:07.424555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.436740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.436756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.436763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.449436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.449453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.449460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.461952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.461969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.461975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.474766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.474784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.474790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.487013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.487029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.487035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.500365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.500382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.500389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.512698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.512715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12428 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.512720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.522626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.522642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.522649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.535863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.535880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.535886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.549365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.549382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.549388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.561257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.561273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.561283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.573361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.573378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.573384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.586444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.586461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.586467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.596484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.596501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:116 nsid:1 lba:17683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.596507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.609574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.609591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.609598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.621546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.621563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.621569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.115 [2024-07-15 13:59:07.634573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.115 [2024-07-15 13:59:07.634590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.115 [2024-07-15 13:59:07.634596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.375 [2024-07-15 13:59:07.646084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.375 [2024-07-15 13:59:07.646102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.375 [2024-07-15 13:59:07.646108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.375 [2024-07-15 13:59:07.658062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.375 [2024-07-15 13:59:07.658078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.375 [2024-07-15 13:59:07.658084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.375 [2024-07-15 13:59:07.670610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.375 [2024-07-15 13:59:07.670630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.375 [2024-07-15 13:59:07.670636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.375 [2024-07-15 13:59:07.684019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.375 [2024-07-15 13:59:07.684036] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.375 [2024-07-15 13:59:07.684042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.375 [2024-07-15 13:59:07.695615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.375 [2024-07-15 13:59:07.695631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.375 [2024-07-15 13:59:07.695637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.375 [2024-07-15 13:59:07.707224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.375 [2024-07-15 13:59:07.707240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.375 [2024-07-15 13:59:07.707246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.375 [2024-07-15 13:59:07.720137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.375 [2024-07-15 13:59:07.720153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.375 [2024-07-15 13:59:07.720159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.375 [2024-07-15 13:59:07.732386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.375 [2024-07-15 13:59:07.732403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.375 [2024-07-15 13:59:07.732409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.375 [2024-07-15 13:59:07.744769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.375 [2024-07-15 13:59:07.744786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.375 [2024-07-15 13:59:07.744792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.375 [2024-07-15 13:59:07.758804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.375 [2024-07-15 13:59:07.758821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.375 [2024-07-15 13:59:07.758827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.375 [2024-07-15 13:59:07.769561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16c58e0) 00:28:41.375 [2024-07-15 13:59:07.769577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.375 [2024-07-15 13:59:07.769584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.375 [2024-07-15 13:59:07.782465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.375 [2024-07-15 13:59:07.782482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.375 [2024-07-15 13:59:07.782488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.375 [2024-07-15 13:59:07.795664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.375 [2024-07-15 13:59:07.795681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.375 [2024-07-15 13:59:07.795687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.375 [2024-07-15 13:59:07.807836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.375 [2024-07-15 13:59:07.807852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.375 [2024-07-15 13:59:07.807858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.375 [2024-07-15 13:59:07.819234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.376 [2024-07-15 13:59:07.819250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.376 [2024-07-15 13:59:07.819256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.376 [2024-07-15 13:59:07.831314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.376 [2024-07-15 13:59:07.831330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.376 [2024-07-15 13:59:07.831336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.376 [2024-07-15 13:59:07.844595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.376 [2024-07-15 13:59:07.844611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.376 [2024-07-15 13:59:07.844617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.376 [2024-07-15 13:59:07.856243] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.376 [2024-07-15 13:59:07.856259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.376 [2024-07-15 13:59:07.856266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.376 [2024-07-15 13:59:07.868342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.376 [2024-07-15 13:59:07.868359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.376 [2024-07-15 13:59:07.868365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.376 [2024-07-15 13:59:07.881000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.376 [2024-07-15 13:59:07.881017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.376 [2024-07-15 13:59:07.881030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.376 [2024-07-15 13:59:07.893070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.376 [2024-07-15 13:59:07.893087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.376 [2024-07-15 13:59:07.893093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.636 [2024-07-15 13:59:07.905218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.636 [2024-07-15 13:59:07.905236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.636 [2024-07-15 13:59:07.905243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.636 [2024-07-15 13:59:07.916895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.636 [2024-07-15 13:59:07.916912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.636 [2024-07-15 13:59:07.916918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.636 [2024-07-15 13:59:07.930002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.636 [2024-07-15 13:59:07.930018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.636 [2024-07-15 13:59:07.930024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:41.636 [2024-07-15 13:59:07.942811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.636 [2024-07-15 13:59:07.942828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.636 [2024-07-15 13:59:07.942834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.636 [2024-07-15 13:59:07.954460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.636 [2024-07-15 13:59:07.954477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.636 [2024-07-15 13:59:07.954483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.636 [2024-07-15 13:59:07.967671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.636 [2024-07-15 13:59:07.967688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.636 [2024-07-15 13:59:07.967694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.636 [2024-07-15 13:59:07.978966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.636 [2024-07-15 13:59:07.978983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.636 [2024-07-15 13:59:07.978989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.636 [2024-07-15 13:59:07.990711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.636 [2024-07-15 13:59:07.990728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.636 [2024-07-15 13:59:07.990734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.636 [2024-07-15 13:59:08.004878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.636 [2024-07-15 13:59:08.004895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.636 [2024-07-15 13:59:08.004901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.636 [2024-07-15 13:59:08.017322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.636 [2024-07-15 13:59:08.017339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.636 [2024-07-15 13:59:08.017345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.636 [2024-07-15 13:59:08.029594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.636 [2024-07-15 13:59:08.029611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.636 [2024-07-15 13:59:08.029617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.636 [2024-07-15 13:59:08.042088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.636 [2024-07-15 13:59:08.042105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.636 [2024-07-15 13:59:08.042111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.636 [2024-07-15 13:59:08.054319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.636 [2024-07-15 13:59:08.054335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.636 [2024-07-15 13:59:08.054341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.636 [2024-07-15 13:59:08.066161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.636 [2024-07-15 13:59:08.066178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.636 [2024-07-15 13:59:08.066184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.636 [2024-07-15 13:59:08.077763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.636 [2024-07-15 13:59:08.077779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.636 [2024-07-15 13:59:08.077785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.636 [2024-07-15 13:59:08.090999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.636 [2024-07-15 13:59:08.091015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.637 [2024-07-15 13:59:08.091025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.637 [2024-07-15 13:59:08.103460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.637 [2024-07-15 13:59:08.103476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.637 [2024-07-15 13:59:08.103482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.637 [2024-07-15 13:59:08.114682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.637 [2024-07-15 13:59:08.114698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.637 [2024-07-15 13:59:08.114704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.637 [2024-07-15 13:59:08.128063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.637 [2024-07-15 13:59:08.128079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.637 [2024-07-15 13:59:08.128085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.637 [2024-07-15 13:59:08.140037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.637 [2024-07-15 13:59:08.140055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.637 [2024-07-15 13:59:08.140062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.637 [2024-07-15 13:59:08.152329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.637 [2024-07-15 13:59:08.152346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.637 [2024-07-15 13:59:08.152352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.897 [2024-07-15 13:59:08.164400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.897 [2024-07-15 13:59:08.164417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.897 [2024-07-15 13:59:08.164423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.897 [2024-07-15 13:59:08.175850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.897 [2024-07-15 13:59:08.175866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.897 [2024-07-15 13:59:08.175872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.897 [2024-07-15 13:59:08.188527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.897 [2024-07-15 13:59:08.188544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:41.897 [2024-07-15 13:59:08.188550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.897 [2024-07-15 13:59:08.201837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.897 [2024-07-15 13:59:08.201857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.897 [2024-07-15 13:59:08.201863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.897 [2024-07-15 13:59:08.213667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.897 [2024-07-15 13:59:08.213683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.897 [2024-07-15 13:59:08.213690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.897 [2024-07-15 13:59:08.225796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.897 [2024-07-15 13:59:08.225813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.897 [2024-07-15 13:59:08.225819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.897 [2024-07-15 13:59:08.237683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.897 [2024-07-15 13:59:08.237699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.897 [2024-07-15 13:59:08.237706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.897 [2024-07-15 13:59:08.249683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.897 [2024-07-15 13:59:08.249700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.897 [2024-07-15 13:59:08.249706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.897 [2024-07-15 13:59:08.261651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.897 [2024-07-15 13:59:08.261668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.898 [2024-07-15 13:59:08.261674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.898 [2024-07-15 13:59:08.274609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.898 [2024-07-15 13:59:08.274626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 
lba:13620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.898 [2024-07-15 13:59:08.274632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.898 [2024-07-15 13:59:08.287034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.898 [2024-07-15 13:59:08.287050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.898 [2024-07-15 13:59:08.287057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.898 [2024-07-15 13:59:08.299472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.898 [2024-07-15 13:59:08.299489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.898 [2024-07-15 13:59:08.299495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.898 [2024-07-15 13:59:08.311546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.898 [2024-07-15 13:59:08.311562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.898 [2024-07-15 13:59:08.311568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.898 [2024-07-15 13:59:08.322525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.898 [2024-07-15 13:59:08.322542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.898 [2024-07-15 13:59:08.322547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.898 [2024-07-15 13:59:08.335286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.898 [2024-07-15 13:59:08.335302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.898 [2024-07-15 13:59:08.335309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.898 [2024-07-15 13:59:08.347694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.898 [2024-07-15 13:59:08.347712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.898 [2024-07-15 13:59:08.347718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.898 [2024-07-15 13:59:08.360904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.898 [2024-07-15 13:59:08.360921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.898 [2024-07-15 13:59:08.360928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.898 [2024-07-15 13:59:08.373321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.898 [2024-07-15 13:59:08.373338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.898 [2024-07-15 13:59:08.373344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.898 [2024-07-15 13:59:08.385044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.898 [2024-07-15 13:59:08.385062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.898 [2024-07-15 13:59:08.385068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.898 [2024-07-15 13:59:08.397751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.898 [2024-07-15 13:59:08.397768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.898 [2024-07-15 13:59:08.397775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.898 [2024-07-15 13:59:08.409631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.898 [2024-07-15 13:59:08.409648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.898 [2024-07-15 13:59:08.409657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.898 [2024-07-15 13:59:08.421729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:41.898 [2024-07-15 13:59:08.421746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.898 [2024-07-15 13:59:08.421752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.158 [2024-07-15 13:59:08.433818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.158 [2024-07-15 13:59:08.433835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.158 [2024-07-15 13:59:08.433841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.158 [2024-07-15 13:59:08.445428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.158 
[2024-07-15 13:59:08.445446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.445453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.458001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.458018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.458024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.470732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.470749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.470755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.481769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.481785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.481791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.494337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.494353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.494360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.506744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.506761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.506767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.518231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.518252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.518259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.531009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.531026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.531033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.543399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.543415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.543421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.556239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.556256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.556262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.568448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.568464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.568470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.579945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.579962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.579968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.593483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.593500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.593506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.604772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.604788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.604794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.617346] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.617362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.617371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.630274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.630291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.630297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.642374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.642390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.642396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 [2024-07-15 13:59:08.654691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16c58e0) 00:28:42.159 [2024-07-15 13:59:08.654709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.159 [2024-07-15 13:59:08.654715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.159 00:28:42.159 Latency(us) 00:28:42.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.159 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:42.159 nvme0n1 : 2.00 20719.21 80.93 0.00 0.00 6171.67 3290.45 16930.13 00:28:42.159 =================================================================================================================== 00:28:42.159 Total : 20719.21 80.93 0.00 0.00 6171.67 3290.45 16930.13 00:28:42.159 0 00:28:42.419 13:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:42.419 13:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:42.419 13:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:42.419 | .driver_specific 00:28:42.419 | .nvme_error 00:28:42.419 | .status_code 00:28:42.420 | .command_transient_transport_error' 00:28:42.420 13:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:42.420 13:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:28:42.420 13:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1263076 00:28:42.420 13:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1263076 ']' 00:28:42.420 13:59:08 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1263076 00:28:42.420 13:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:42.420 13:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:42.420 13:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1263076 00:28:42.420 13:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:42.420 13:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:42.420 13:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1263076' 00:28:42.420 killing process with pid 1263076 00:28:42.420 13:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1263076 00:28:42.420 Received shutdown signal, test time was about 2.000000 seconds 00:28:42.420 00:28:42.420 Latency(us) 00:28:42.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.420 =================================================================================================================== 00:28:42.420 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:42.420 13:59:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1263076 00:28:42.680 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:42.680 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:42.680 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:42.680 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:42.680 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:42.680 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1263890 00:28:42.680 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1263890 /var/tmp/bperf.sock 00:28:42.680 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1263890 ']' 00:28:42.680 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:42.680 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:42.680 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:42.680 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:42.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:42.680 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:42.680 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:42.680 [2024-07-15 13:59:09.074961] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:28:42.680 [2024-07-15 13:59:09.075028] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263890 ] 00:28:42.680 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:42.680 Zero copy mechanism will not be used. 00:28:42.680 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.680 [2024-07-15 13:59:09.152517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.941 [2024-07-15 13:59:09.205875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.512 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:43.512 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:43.512 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:43.512 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:43.512 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:43.512 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.512 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:43.512 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.512 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:43.512 13:59:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.084 nvme0n1 00:28:44.084 13:59:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:44.084 13:59:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.084 13:59:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.084 13:59:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.084 13:59:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:44.084 13:59:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:44.084 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:44.084 Zero copy mechanism will not be used. 00:28:44.084 Running I/O for 2 seconds... 
00:28:44.084 [2024-07-15 13:59:10.460385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.084 [2024-07-15 13:59:10.460417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.084 [2024-07-15 13:59:10.460426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.084 [2024-07-15 13:59:10.476140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.084 [2024-07-15 13:59:10.476162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.084 [2024-07-15 13:59:10.476170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.084 [2024-07-15 13:59:10.490342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.084 [2024-07-15 13:59:10.490361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.084 [2024-07-15 13:59:10.490368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.084 [2024-07-15 13:59:10.504689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.084 [2024-07-15 13:59:10.504708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.084 [2024-07-15 13:59:10.504715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.084 [2024-07-15 13:59:10.518747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.084 [2024-07-15 13:59:10.518765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.084 [2024-07-15 13:59:10.518772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.084 [2024-07-15 13:59:10.531894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.084 [2024-07-15 13:59:10.531911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.084 [2024-07-15 13:59:10.531918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.084 [2024-07-15 13:59:10.546645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.084 [2024-07-15 13:59:10.546667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.084 [2024-07-15 13:59:10.546673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.084 [2024-07-15 13:59:10.559964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.084 [2024-07-15 13:59:10.559982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.084 [2024-07-15 13:59:10.559988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.084 [2024-07-15 13:59:10.574158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.084 [2024-07-15 13:59:10.574175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.084 [2024-07-15 13:59:10.574181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.084 [2024-07-15 13:59:10.587733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.084 [2024-07-15 13:59:10.587751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.084 [2024-07-15 13:59:10.587757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.084 [2024-07-15 13:59:10.601695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.084 [2024-07-15 13:59:10.601713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.084 [2024-07-15 13:59:10.601719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.616109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.616131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.616138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.625310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.625327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.625333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.639349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.639367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.639374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.654351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.654368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.654378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.669576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.669594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.669600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.683616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.683634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.683640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.698925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.698943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.698949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.713402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.713420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.713426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.727257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.727274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.727280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.741467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.741484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:44.346 [2024-07-15 13:59:10.741490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.756588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.756606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.756612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.769494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.769510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.769517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.781277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.781298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.781304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.796037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.796055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.796061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.810590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.810608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.810614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.823260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.823278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.823284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.837547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.837564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.837571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.851308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.851326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.851332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.346 [2024-07-15 13:59:10.864150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.346 [2024-07-15 13:59:10.864168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.346 [2024-07-15 13:59:10.864174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.608 [2024-07-15 13:59:10.878736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.608 [2024-07-15 13:59:10.878754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.608 [2024-07-15 13:59:10.878760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.608 [2024-07-15 13:59:10.893439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.608 [2024-07-15 13:59:10.893457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.608 [2024-07-15 13:59:10.893463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.608 [2024-07-15 13:59:10.906970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.608 [2024-07-15 13:59:10.906989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.608 [2024-07-15 13:59:10.906995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.608 [2024-07-15 13:59:10.921491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.608 [2024-07-15 13:59:10.921509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.608 [2024-07-15 13:59:10.921515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.608 [2024-07-15 13:59:10.935433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.608 [2024-07-15 13:59:10.935451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.608 [2024-07-15 13:59:10.935457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.608 [2024-07-15 13:59:10.950391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.608 [2024-07-15 13:59:10.950409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.608 [2024-07-15 13:59:10.950415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.608 [2024-07-15 13:59:10.965530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.608 [2024-07-15 13:59:10.965548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.608 [2024-07-15 13:59:10.965554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.608 [2024-07-15 13:59:10.977989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.608 [2024-07-15 13:59:10.978007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.608 [2024-07-15 13:59:10.978013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.608 [2024-07-15 13:59:10.989865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.608 [2024-07-15 13:59:10.989883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.608 [2024-07-15 13:59:10.989890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.608 [2024-07-15 13:59:11.002676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.608 [2024-07-15 13:59:11.002695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.608 [2024-07-15 13:59:11.002701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.608 [2024-07-15 13:59:11.016619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.608 [2024-07-15 13:59:11.016637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.608 [2024-07-15 13:59:11.016646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.608 [2024-07-15 13:59:11.031599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 
00:28:44.608 [2024-07-15 13:59:11.031616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.609 [2024-07-15 13:59:11.031622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.609 [2024-07-15 13:59:11.045647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.609 [2024-07-15 13:59:11.045665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.609 [2024-07-15 13:59:11.045672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.609 [2024-07-15 13:59:11.059089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.609 [2024-07-15 13:59:11.059108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.609 [2024-07-15 13:59:11.059114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.609 [2024-07-15 13:59:11.072591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.609 [2024-07-15 13:59:11.072610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.609 [2024-07-15 13:59:11.072616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.609 [2024-07-15 13:59:11.086242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.609 [2024-07-15 13:59:11.086260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.609 [2024-07-15 13:59:11.086267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.609 [2024-07-15 13:59:11.099552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.609 [2024-07-15 13:59:11.099570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.609 [2024-07-15 13:59:11.099576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.609 [2024-07-15 13:59:11.113884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.609 [2024-07-15 13:59:11.113902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.609 [2024-07-15 13:59:11.113908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.609 [2024-07-15 13:59:11.128590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.609 [2024-07-15 13:59:11.128609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.609 [2024-07-15 13:59:11.128615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.142399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.142424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.142430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.157020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.157039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.157045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.171735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.171753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.171760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.184442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.184460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.184467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.197506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.197524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.197530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.209755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.209773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.209780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.222979] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.222997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.223003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.236711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.236728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.236734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.250447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.250464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.250471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.264860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.264878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.264884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.278516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.278534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.278540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.292558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.292575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.292582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.307196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.307214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.307220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:28:44.870 [2024-07-15 13:59:11.319569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.319587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.319593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.335524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.335541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.335547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.349658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.349676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.349682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.361537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.361555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.361561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.375154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.375171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.375180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.870 [2024-07-15 13:59:11.389391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:44.870 [2024-07-15 13:59:11.389409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-07-15 13:59:11.389415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.404846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.404863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.404870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.420790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.420808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.420814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.435784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.435801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.435808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.450839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.450856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.450862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.466599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.466616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.466623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.481754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.481772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.481778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.496953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.496971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.496977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.509962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.509980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.509986] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.522881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.522899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.522905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.535639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.535657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.535663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.549209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.549227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.549233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.562643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.562660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.562667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.577366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.577384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.577390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.592741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.592759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.592765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.607783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.607801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.607807] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.623162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.623179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.623188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.637005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.637022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.637028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.131 [2024-07-15 13:59:11.652189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.131 [2024-07-15 13:59:11.652207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-07-15 13:59:11.652213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.391 [2024-07-15 13:59:11.665927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.391 [2024-07-15 13:59:11.665945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.391 [2024-07-15 13:59:11.665952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.391 [2024-07-15 13:59:11.680630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.391 [2024-07-15 13:59:11.680648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.391 [2024-07-15 13:59:11.680654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.391 [2024-07-15 13:59:11.695316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.391 [2024-07-15 13:59:11.695333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.391 [2024-07-15 13:59:11.695340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.391 [2024-07-15 13:59:11.709945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.391 [2024-07-15 13:59:11.709962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:45.391 [2024-07-15 13:59:11.709968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.391 [2024-07-15 13:59:11.723691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.391 [2024-07-15 13:59:11.723709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.391 [2024-07-15 13:59:11.723715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.391 [2024-07-15 13:59:11.738452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.391 [2024-07-15 13:59:11.738470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.391 [2024-07-15 13:59:11.738476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.391 [2024-07-15 13:59:11.752113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.391 [2024-07-15 13:59:11.752138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.391 [2024-07-15 13:59:11.752144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.391 [2024-07-15 13:59:11.763845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.391 [2024-07-15 13:59:11.763863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.391 [2024-07-15 13:59:11.763869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.392 [2024-07-15 13:59:11.778443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.392 [2024-07-15 13:59:11.778461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.392 [2024-07-15 13:59:11.778467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.392 [2024-07-15 13:59:11.793463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.392 [2024-07-15 13:59:11.793481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.392 [2024-07-15 13:59:11.793487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.392 [2024-07-15 13:59:11.808892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.392 [2024-07-15 13:59:11.808910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.392 [2024-07-15 13:59:11.808916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.392 [2024-07-15 13:59:11.824021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.392 [2024-07-15 13:59:11.824039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.392 [2024-07-15 13:59:11.824045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.392 [2024-07-15 13:59:11.836799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.392 [2024-07-15 13:59:11.836817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.392 [2024-07-15 13:59:11.836823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.392 [2024-07-15 13:59:11.849116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.392 [2024-07-15 13:59:11.849139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.392 [2024-07-15 13:59:11.849145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.392 [2024-07-15 13:59:11.858972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.392 [2024-07-15 13:59:11.858990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.392 [2024-07-15 13:59:11.858997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.392 [2024-07-15 13:59:11.871799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.392 [2024-07-15 13:59:11.871817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.392 [2024-07-15 13:59:11.871823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.392 [2024-07-15 13:59:11.885858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.392 [2024-07-15 13:59:11.885876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.392 [2024-07-15 13:59:11.885882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.392 [2024-07-15 13:59:11.900465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.392 [2024-07-15 13:59:11.900483] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.392 [2024-07-15 13:59:11.900489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.392 [2024-07-15 13:59:11.915128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.392 [2024-07-15 13:59:11.915145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.392 [2024-07-15 13:59:11.915151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.652 [2024-07-15 13:59:11.929225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.652 [2024-07-15 13:59:11.929244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.652 [2024-07-15 13:59:11.929250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.652 [2024-07-15 13:59:11.942943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.652 [2024-07-15 13:59:11.942961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.652 [2024-07-15 13:59:11.942967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.652 [2024-07-15 13:59:11.955224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.652 [2024-07-15 13:59:11.955242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.652 [2024-07-15 13:59:11.955249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.652 [2024-07-15 13:59:11.967799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.652 [2024-07-15 13:59:11.967817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.652 [2024-07-15 13:59:11.967823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.652 [2024-07-15 13:59:11.982037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.652 [2024-07-15 13:59:11.982055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.652 [2024-07-15 13:59:11.982065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.652 [2024-07-15 13:59:11.996189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.652 [2024-07-15 13:59:11.996207] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.652 [2024-07-15 13:59:11.996213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.652 [2024-07-15 13:59:12.009597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.652 [2024-07-15 13:59:12.009615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.652 [2024-07-15 13:59:12.009621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.652 [2024-07-15 13:59:12.021851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.652 [2024-07-15 13:59:12.021869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.653 [2024-07-15 13:59:12.021875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.653 [2024-07-15 13:59:12.035278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.653 [2024-07-15 13:59:12.035296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.653 [2024-07-15 13:59:12.035302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.653 [2024-07-15 13:59:12.048178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.653 [2024-07-15 13:59:12.048196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.653 [2024-07-15 13:59:12.048202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.653 [2024-07-15 13:59:12.060923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.653 [2024-07-15 13:59:12.060941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.653 [2024-07-15 13:59:12.060947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.653 [2024-07-15 13:59:12.073623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.653 [2024-07-15 13:59:12.073640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.653 [2024-07-15 13:59:12.073647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.653 [2024-07-15 13:59:12.086068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13a1b80) 00:28:45.653 [2024-07-15 13:59:12.086085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.653 [2024-07-15 13:59:12.086091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.653 [2024-07-15 13:59:12.101382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.653 [2024-07-15 13:59:12.101403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.653 [2024-07-15 13:59:12.101409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.653 [2024-07-15 13:59:12.116305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.653 [2024-07-15 13:59:12.116322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.653 [2024-07-15 13:59:12.116328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.653 [2024-07-15 13:59:12.130430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.653 [2024-07-15 13:59:12.130448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.653 [2024-07-15 13:59:12.130454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.653 [2024-07-15 13:59:12.144952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.653 [2024-07-15 13:59:12.144970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.653 [2024-07-15 13:59:12.144976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.653 [2024-07-15 13:59:12.157892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.653 [2024-07-15 13:59:12.157910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.653 [2024-07-15 13:59:12.157916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.653 [2024-07-15 13:59:12.172954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.653 [2024-07-15 13:59:12.172972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.653 [2024-07-15 13:59:12.172978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.187377] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.187395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.187401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.201113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.201135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.201141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.214686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.214705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.214711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.229749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.229767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.229774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.242937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.242955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.242961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.256572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.256590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.256596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.271442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.271461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.271467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:28:45.914 [2024-07-15 13:59:12.282808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.282827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.282833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.297428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.297447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.297453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.313056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.313073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.313080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.327324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.327342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.327350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.341727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.341745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.341754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.352628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.352645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.352652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.367482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.367500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.367506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.378807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.378825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.378831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.392153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.392170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.392177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.405235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.405253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.405259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.415663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.415681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.415687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.914 [2024-07-15 13:59:12.428772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:45.914 [2024-07-15 13:59:12.428789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.914 [2024-07-15 13:59:12.428795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.176 [2024-07-15 13:59:12.441383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:46.176 [2024-07-15 13:59:12.441400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.176 [2024-07-15 13:59:12.441407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.176 [2024-07-15 13:59:12.453457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13a1b80) 00:28:46.176 [2024-07-15 13:59:12.453475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.176 [2024-07-15 13:59:12.453482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:46.176
00:28:46.176 Latency(us)
00:28:46.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:46.176 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:46.176 nvme0n1 : 2.01 2243.04 280.38 0.00 0.00 7127.41 1604.27 16056.32
00:28:46.176 ===================================================================================================================
00:28:46.176 Total : 2243.04 280.38 0.00 0.00 7127.41 1604.27 16056.32
00:28:46.176 0
00:28:46.176 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:46.176 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:46.176 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:46.176 | .driver_specific
00:28:46.176 | .nvme_error
00:28:46.176 | .status_code
00:28:46.176 | .command_transient_transport_error'
00:28:46.176 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:46.176 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 ))
00:28:46.176 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1263890
00:28:46.176 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1263890 ']'
00:28:46.176 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1263890
00:28:46.176 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:46.176 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:46.176 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1263890
00:28:46.176 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:46.176 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:46.176 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1263890'
00:28:46.176 killing process with pid 1263890
00:28:46.176 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1263890
00:28:46.176 Received shutdown signal, test time was about 2.000000 seconds
00:28:46.176
00:28:46.176 Latency(us)
00:28:46.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:46.176 ===================================================================================================================
00:28:46.176 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:46.176 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1263890
00:28:46.436 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:46.436 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:46.436 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
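The get_transient_errcount check in the trace above reduces to one bdev_get_iostat RPC filtered through jq. A minimal standalone sketch of the same query, assuming an SPDK app is still listening on /var/tmp/bperf.sock with a bdev named nvme0n1 and was configured with bdev_nvme_set_options --nvme-error-stat; the jq path is taken from the filter in the trace, and the non-zero check mirrors the (( 145 > 0 )) test:

#!/usr/bin/env bash
# Sketch only: read the COMMAND TRANSIENT TRANSPORT ERROR counter the way the test does.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path from this run
sock=/var/tmp/bperf.sock
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# A non-zero count means the injected data digest errors surfaced as transient transport errors.
(( errcount > 0 )) && echo "transient transport errors observed: $errcount"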
00:28:46.436 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:46.436 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:46.436 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1264691
00:28:46.436 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1264691 /var/tmp/bperf.sock
00:28:46.436 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1264691 ']'
00:28:46.436 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:46.436 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:46.436 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:46.436 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:46.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:46.436 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:46.436 13:59:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:46.436 [2024-07-15 13:59:12.866383] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:28:46.436 [2024-07-15 13:59:12.866444] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264691 ]
00:28:46.437 EAL: No free 2048 kB hugepages reported on node 1
00:28:46.437 [2024-07-15 13:59:12.942659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:46.698 [2024-07-15 13:59:12.995889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:47.267 13:59:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:47.267 13:59:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:47.267 13:59:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:47.267 13:59:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:47.267 13:59:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:47.267 13:59:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:47.267 13:59:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:47.527 13:59:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:47.527 13:59:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:47.527 13:59:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.527 nvme0n1 00:28:47.788 13:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:47.788 13:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.788 13:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:47.788 13:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.788 13:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:47.788 13:59:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:47.788 Running I/O for 2 seconds... 00:28:47.788 [2024-07-15 13:59:14.182987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:47.788 [2024-07-15 13:59:14.183407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.788 [2024-07-15 13:59:14.183432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:47.788 [2024-07-15 13:59:14.195213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:47.789 [2024-07-15 13:59:14.195611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.789 [2024-07-15 13:59:14.195628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:47.789 [2024-07-15 13:59:14.207367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:47.789 [2024-07-15 13:59:14.207767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.789 [2024-07-15 13:59:14.207783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:47.789 [2024-07-15 13:59:14.219524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:47.789 [2024-07-15 13:59:14.219918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.789 [2024-07-15 13:59:14.219933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:47.789 [2024-07-15 13:59:14.231667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:47.789 [2024-07-15 13:59:14.232075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.789 [2024-07-15 13:59:14.232091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
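The randwrite phase whose WRITE data digest errors fill the rest of this run is set up by the trace just above. A condensed sketch of that sequence, with binaries, sockets and flags copied from the trace; the target-side RPC socket used for accel_error_inject_error is an assumption (the script issues it via rpc_cmd rather than bperf_rpc), and the real test waits with waitforlisten rather than a fixed sleep:

#!/usr/bin/env bash
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_sock=/var/tmp/bperf.sock

# Start bdevperf idle (-z): 4096-byte random writes, queue depth 128, 2 second runtime.
"$spdk/build/examples/bdevperf" -m 2 -r "$bperf_sock" -w randwrite -o 4096 -t 2 -q 128 -z &
sleep 2   # stand-in for waitforlisten on $bperf_sock

# Enable per-status-code NVMe error counters and bdev retries, with the flags used in the trace.
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the TCP target with data digest enabled on the initiator side (--ddgst).
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Inject crc32c corruption on the target so data digests fail (socket path assumed, flags as traced).
"$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the queued bdevperf job; each corrupted WRITE then completes with
# COMMAND TRANSIENT TRANSPORT ERROR, which get_transient_errcount tallies afterwards.
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests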
(00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:47.789 [2024-07-15 13:59:14.243856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:47.789 [2024-07-15 13:59:14.244107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.789 [2024-07-15 13:59:14.244129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:47.789 [2024-07-15 13:59:14.255998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:47.789 [2024-07-15 13:59:14.256387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.789 [2024-07-15 13:59:14.256401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:47.789 [2024-07-15 13:59:14.268132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:47.789 [2024-07-15 13:59:14.268592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.789 [2024-07-15 13:59:14.268607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:47.789 [2024-07-15 13:59:14.280396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:47.789 [2024-07-15 13:59:14.280798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.789 [2024-07-15 13:59:14.280813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:47.789 [2024-07-15 13:59:14.292561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:47.789 [2024-07-15 13:59:14.292942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.789 [2024-07-15 13:59:14.292957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:47.789 [2024-07-15 13:59:14.304723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:47.789 [2024-07-15 13:59:14.305182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.789 [2024-07-15 13:59:14.305197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.050 [2024-07-15 13:59:14.316819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.050 [2024-07-15 13:59:14.317197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.050 [2024-07-15 13:59:14.317212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.050 [2024-07-15 13:59:14.328948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.050 [2024-07-15 13:59:14.329218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.050 [2024-07-15 13:59:14.329232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.050 [2024-07-15 13:59:14.341127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.050 [2024-07-15 13:59:14.341376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.050 [2024-07-15 13:59:14.341391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.050 [2024-07-15 13:59:14.353207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.050 [2024-07-15 13:59:14.353600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.050 [2024-07-15 13:59:14.353614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.050 [2024-07-15 13:59:14.365338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.050 [2024-07-15 13:59:14.365766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.050 [2024-07-15 13:59:14.365781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.050 [2024-07-15 13:59:14.377482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.050 [2024-07-15 13:59:14.377864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.050 [2024-07-15 13:59:14.377879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.050 [2024-07-15 13:59:14.389611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.050 [2024-07-15 13:59:14.389886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.050 [2024-07-15 13:59:14.389904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.050 [2024-07-15 13:59:14.401870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.050 [2024-07-15 13:59:14.402325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.050 [2024-07-15 13:59:14.402339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.050 [2024-07-15 13:59:14.413972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.050 [2024-07-15 13:59:14.414253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.051 [2024-07-15 13:59:14.414268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.051 [2024-07-15 13:59:14.426047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.051 [2024-07-15 13:59:14.426410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.051 [2024-07-15 13:59:14.426425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.051 [2024-07-15 13:59:14.438226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.051 [2024-07-15 13:59:14.438610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.051 [2024-07-15 13:59:14.438626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.051 [2024-07-15 13:59:14.450419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.051 [2024-07-15 13:59:14.450665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.051 [2024-07-15 13:59:14.450680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.051 [2024-07-15 13:59:14.462497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.051 [2024-07-15 13:59:14.462781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.051 [2024-07-15 13:59:14.462796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.051 [2024-07-15 13:59:14.474677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.051 [2024-07-15 13:59:14.475132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.051 [2024-07-15 13:59:14.475147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.051 [2024-07-15 13:59:14.486750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.051 [2024-07-15 13:59:14.487147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.051 [2024-07-15 13:59:14.487163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.051 [2024-07-15 13:59:14.498858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.051 [2024-07-15 13:59:14.499267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.051 [2024-07-15 13:59:14.499285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.051 [2024-07-15 13:59:14.511062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.051 [2024-07-15 13:59:14.511340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.051 [2024-07-15 13:59:14.511362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.051 [2024-07-15 13:59:14.523197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.051 [2024-07-15 13:59:14.523563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.051 [2024-07-15 13:59:14.523578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.051 [2024-07-15 13:59:14.535311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.051 [2024-07-15 13:59:14.535700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.051 [2024-07-15 13:59:14.535714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.051 [2024-07-15 13:59:14.547439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.051 [2024-07-15 13:59:14.547906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.051 [2024-07-15 13:59:14.547922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.051 [2024-07-15 13:59:14.559573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.051 [2024-07-15 13:59:14.560036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.051 [2024-07-15 13:59:14.560051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.051 [2024-07-15 13:59:14.571692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.051 [2024-07-15 13:59:14.571954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.051 [2024-07-15 13:59:14.571970] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.311 [2024-07-15 13:59:14.583844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.311 [2024-07-15 13:59:14.584328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.311 [2024-07-15 13:59:14.584343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.311 [2024-07-15 13:59:14.595943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.311 [2024-07-15 13:59:14.596212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.311 [2024-07-15 13:59:14.596227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.311 [2024-07-15 13:59:14.608033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.311 [2024-07-15 13:59:14.608513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.608529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.620157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.620635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.620650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.632249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.632598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.632613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.644361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.644734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.644749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.656471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.656927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.656942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.668570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.668972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.668987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.680732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.680981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.680996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.692835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.693086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.693100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.704929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.705181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.705196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.717056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.717399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.717415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.729173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.729573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.729588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.741283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.741670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.741685] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.753439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.753826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.753841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.765514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.765872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.765887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.777649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.777908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.777924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.789732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.789991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.790006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.801848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.802199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.802214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.813932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.814192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 13:59:14.814209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.312 [2024-07-15 13:59:14.826002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.312 [2024-07-15 13:59:14.826277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.312 [2024-07-15 
13:59:14.826292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:14.838145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:14.838493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:14.838508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:14.850246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:14.850638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:14.850653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:14.862380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:14.862749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:14.862764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:14.874467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:14.874716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:14.874731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:14.886604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:14.886856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:14.886870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:14.898748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:14.899125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:14.899140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:14.910854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:14.911117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 
[2024-07-15 13:59:14.911135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:14.922960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:14.923401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:14.923416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:14.935067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:14.935457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:14.935472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:14.947192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:14.947442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:14.947457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:14.959299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:14.959695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:14.959711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:14.971451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:14.971723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:14.971738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:14.983502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:14.983876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:14.983891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:14.995601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:14.995952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:48.573 [2024-07-15 13:59:14.995967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:15.007761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:15.008007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:15.008028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:15.019871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:15.020250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:15.020265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:15.031926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:15.032284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:15.032299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:15.044016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:15.044406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:15.044421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:15.056180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:15.056428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:15.056449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:15.068269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:15.068747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:15.068762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:15.080447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:15.080698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11640 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:48.573 [2024-07-15 13:59:15.080713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.573 [2024-07-15 13:59:15.092726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.573 [2024-07-15 13:59:15.093047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.573 [2024-07-15 13:59:15.093062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.104820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.105173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.105188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.116931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.117218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.117233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.129015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.129285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.129300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.141130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.141398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.141413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.153232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.153624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.153640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.165407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.165655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13011 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.165670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.177541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.177808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.177823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.189647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.189896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.189910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.201709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.201976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.201991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.213894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.214148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.214163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.226023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.226297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.226313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.238127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.238380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.238397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.250204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.250582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18829 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.250597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.262337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.262712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.262727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.274402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.274795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.274810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.286597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.287001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.287016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.298684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.299048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.299063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.310776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.311245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.311260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.322834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.835 [2024-07-15 13:59:15.323225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.835 [2024-07-15 13:59:15.323240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.835 [2024-07-15 13:59:15.334937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.836 [2024-07-15 13:59:15.335327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9868 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.836 [2024-07-15 13:59:15.335341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.836 [2024-07-15 13:59:15.347034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.836 [2024-07-15 13:59:15.347509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.836 [2024-07-15 13:59:15.347525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:48.836 [2024-07-15 13:59:15.359109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:48.836 [2024-07-15 13:59:15.359579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.836 [2024-07-15 13:59:15.359594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.097 [2024-07-15 13:59:15.371236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.097 [2024-07-15 13:59:15.371607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.097 [2024-07-15 13:59:15.371622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.097 [2024-07-15 13:59:15.383374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.097 [2024-07-15 13:59:15.383654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.097 [2024-07-15 13:59:15.383669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.097 [2024-07-15 13:59:15.395480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.097 [2024-07-15 13:59:15.395867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.097 [2024-07-15 13:59:15.395881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.097 [2024-07-15 13:59:15.407630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.097 [2024-07-15 13:59:15.407881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.097 [2024-07-15 13:59:15.407895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.097 [2024-07-15 13:59:15.419739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.097 [2024-07-15 13:59:15.420160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19963 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.097 [2024-07-15 13:59:15.420175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.097 [2024-07-15 13:59:15.431831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.097 [2024-07-15 13:59:15.432082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.097 [2024-07-15 13:59:15.432096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.097 [2024-07-15 13:59:15.443896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.097 [2024-07-15 13:59:15.444342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.097 [2024-07-15 13:59:15.444357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.097 [2024-07-15 13:59:15.455979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.097 [2024-07-15 13:59:15.456354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.098 [2024-07-15 13:59:15.456369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.098 [2024-07-15 13:59:15.468172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.098 [2024-07-15 13:59:15.468523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.098 [2024-07-15 13:59:15.468538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.098 [2024-07-15 13:59:15.480239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.098 [2024-07-15 13:59:15.480632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.098 [2024-07-15 13:59:15.480647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.098 [2024-07-15 13:59:15.492290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.098 [2024-07-15 13:59:15.492646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.098 [2024-07-15 13:59:15.492661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.098 [2024-07-15 13:59:15.504405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.098 [2024-07-15 13:59:15.504855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:16989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.098 [2024-07-15 13:59:15.504871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.098 [2024-07-15 13:59:15.516544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.098 [2024-07-15 13:59:15.516912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.098 [2024-07-15 13:59:15.516926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.098 [2024-07-15 13:59:15.528602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.098 [2024-07-15 13:59:15.528920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.098 [2024-07-15 13:59:15.528934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.098 [2024-07-15 13:59:15.540682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.098 [2024-07-15 13:59:15.541075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.098 [2024-07-15 13:59:15.541090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.098 [2024-07-15 13:59:15.552793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.098 [2024-07-15 13:59:15.553201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.098 [2024-07-15 13:59:15.553218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.098 [2024-07-15 13:59:15.564923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.098 [2024-07-15 13:59:15.565325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.098 [2024-07-15 13:59:15.565340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.098 [2024-07-15 13:59:15.577033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.098 [2024-07-15 13:59:15.577299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.098 [2024-07-15 13:59:15.577314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.098 [2024-07-15 13:59:15.589177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.098 [2024-07-15 13:59:15.589553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:2185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.098 [2024-07-15 13:59:15.589568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.098 [2024-07-15 13:59:15.601317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.098 [2024-07-15 13:59:15.601667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.098 [2024-07-15 13:59:15.601682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.098 [2024-07-15 13:59:15.613421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.098 [2024-07-15 13:59:15.613810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.098 [2024-07-15 13:59:15.613825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.358 [2024-07-15 13:59:15.625530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.625981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.625996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.637625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.637896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.637911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.649749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.650210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.650225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.661882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.662303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.662318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.673975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.674321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:24722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.674336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.686110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.686386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.686400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.698236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.698616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.698631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.710335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.710634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.710649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.722455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.722906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.722921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.734572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.734831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.734845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.746690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.747163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.747178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.758769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.759034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:4824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.759049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.770858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.771230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.771245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.782967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.783244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.783259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.795116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.795529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.795544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.807286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.807743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.807758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.819444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.819804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.819819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.831578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.831970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.831985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.843686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.843936] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.843951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.855778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.856183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.856198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.867842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.868215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.868230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.359 [2024-07-15 13:59:15.880014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.359 [2024-07-15 13:59:15.880266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.359 [2024-07-15 13:59:15.880281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.620 [2024-07-15 13:59:15.892134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.620 [2024-07-15 13:59:15.892580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.620 [2024-07-15 13:59:15.892596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.620 [2024-07-15 13:59:15.904185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.620 [2024-07-15 13:59:15.904560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.620 [2024-07-15 13:59:15.904575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.620 [2024-07-15 13:59:15.916338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.620 [2024-07-15 13:59:15.916589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.620 [2024-07-15 13:59:15.916603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.620 [2024-07-15 13:59:15.928468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.620 [2024-07-15 13:59:15.928855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:15.928870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:15.940583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 13:59:15.941048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:15.941063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:15.952622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 13:59:15.953027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:15.953042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:15.964829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 13:59:15.965215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:15.965231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:15.976975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 13:59:15.977228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:15.977245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:15.989081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 13:59:15.989537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:15.989552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:16.001156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 13:59:16.001563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:16.001578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:16.013335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 13:59:16.013728] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:16.013743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:16.025378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 13:59:16.025740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:16.025754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:16.037567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 13:59:16.038005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:16.038020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:16.049683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 13:59:16.050071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:16.050085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:16.061758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 13:59:16.062173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:16.062188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:16.073896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 13:59:16.074264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:16.074279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:16.086161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 13:59:16.086576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:16.086590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:16.098315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 
13:59:16.098713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:16.098728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:16.110384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 13:59:16.110737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:16.110751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:16.122526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 13:59:16.122845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:16.122859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.621 [2024-07-15 13:59:16.134650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.621 [2024-07-15 13:59:16.135108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.621 [2024-07-15 13:59:16.135127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.882 [2024-07-15 13:59:16.146753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.882 [2024-07-15 13:59:16.147202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.882 [2024-07-15 13:59:16.147217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.882 [2024-07-15 13:59:16.158856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.882 [2024-07-15 13:59:16.159135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.882 [2024-07-15 13:59:16.159150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.882 [2024-07-15 13:59:16.170952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x144baa0) with pdu=0x2000190fdeb0 00:28:49.882 [2024-07-15 13:59:16.171203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.882 [2024-07-15 13:59:16.171216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:49.882 00:28:49.882 Latency(us) 00:28:49.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.882 Job: nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:28:49.882 nvme0n1 : 2.01 21028.10 82.14 0.00 0.00 6075.47 5324.80 12397.23 00:28:49.882 =================================================================================================================== 00:28:49.882 Total : 21028.10 82.14 0.00 0.00 6075.47 5324.80 12397.23 00:28:49.882 0 00:28:49.882 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:49.882 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:49.882 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:49.882 | .driver_specific 00:28:49.882 | .nvme_error 00:28:49.882 | .status_code 00:28:49.882 | .command_transient_transport_error' 00:28:49.882 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:49.882 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 165 > 0 )) 00:28:49.882 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1264691 00:28:49.882 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1264691 ']' 00:28:49.882 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1264691 00:28:49.882 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:49.882 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:49.882 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1264691 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1264691' 00:28:50.142 killing process with pid 1264691 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1264691 00:28:50.142 Received shutdown signal, test time was about 2.000000 seconds 00:28:50.142 00:28:50.142 Latency(us) 00:28:50.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.142 =================================================================================================================== 00:28:50.142 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1264691 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1265435 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@60 -- # waitforlisten 1265435 /var/tmp/bperf.sock 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1265435 ']' 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:50.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:50.142 13:59:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:50.142 [2024-07-15 13:59:16.581400] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:28:50.142 [2024-07-15 13:59:16.581457] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265435 ] 00:28:50.142 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:50.142 Zero copy mechanism will not be used. 00:28:50.142 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.142 [2024-07-15 13:59:16.654736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.403 [2024-07-15 13:59:16.707960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.975 13:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:50.975 13:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:50.975 13:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:50.975 13:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:50.975 13:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:50.975 13:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.975 13:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:51.249 13:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.249 13:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.249 13:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.529 nvme0n1 00:28:51.529 13:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:51.529 13:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.529 13:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:51.529 13:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.529 13:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:51.529 13:59:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:51.529 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:51.529 Zero copy mechanism will not be used. 00:28:51.529 Running I/O for 2 seconds... 00:28:51.529 [2024-07-15 13:59:18.006906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.529 [2024-07-15 13:59:18.007159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.529 [2024-07-15 13:59:18.007185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.529 [2024-07-15 13:59:18.014843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.529 [2024-07-15 13:59:18.015062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.529 [2024-07-15 13:59:18.015080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.529 [2024-07-15 13:59:18.022757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.529 [2024-07-15 13:59:18.022862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.529 [2024-07-15 13:59:18.022877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.529 [2024-07-15 13:59:18.033239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.529 [2024-07-15 13:59:18.033519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.529 [2024-07-15 13:59:18.033538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.529 [2024-07-15 13:59:18.042287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.529 [2024-07-15 13:59:18.042427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.529 [2024-07-15 13:59:18.042443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.529 [2024-07-15 13:59:18.050313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.529 [2024-07-15 13:59:18.050526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.529 [2024-07-15 13:59:18.050543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.790 [2024-07-15 13:59:18.060606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.790 [2024-07-15 13:59:18.060946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.790 [2024-07-15 13:59:18.060963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.790 [2024-07-15 13:59:18.070420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.790 [2024-07-15 13:59:18.070770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.790 [2024-07-15 13:59:18.070787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.790 [2024-07-15 13:59:18.080245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.790 [2024-07-15 13:59:18.080595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.790 [2024-07-15 13:59:18.080612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.790 [2024-07-15 13:59:18.089745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.790 [2024-07-15 13:59:18.089969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.790 [2024-07-15 13:59:18.089985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.790 [2024-07-15 13:59:18.099828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.790 [2024-07-15 13:59:18.099920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.790 [2024-07-15 13:59:18.099938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.790 [2024-07-15 13:59:18.109906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.790 [2024-07-15 13:59:18.110243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.790 [2024-07-15 13:59:18.110260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.790 [2024-07-15 13:59:18.119927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.790 [2024-07-15 13:59:18.120013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.790 [2024-07-15 13:59:18.120028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.790 [2024-07-15 13:59:18.128852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.790 [2024-07-15 13:59:18.128921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.790 [2024-07-15 13:59:18.128936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.135905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.136237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.136253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.145406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.145763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.145779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.156371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.156713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.156730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.167775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.168089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.168106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.179110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.179438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.179454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.190019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.190244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.190261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.200951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.201229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.201246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.212049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.212267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.212282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.222169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.222391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.222407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.232767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.233016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.233033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.243508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.243807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.243824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.253851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.254118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 
[2024-07-15 13:59:18.254140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.264380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.264678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.264694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.274569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.274770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.274789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.282801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.283017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.283033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.291969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.292313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.292330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.300115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.300379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.300395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:51.791 [2024-07-15 13:59:18.307531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:51.791 [2024-07-15 13:59:18.307817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.791 [2024-07-15 13:59:18.307833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.052 [2024-07-15 13:59:18.316074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.052 [2024-07-15 13:59:18.316407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.052 [2024-07-15 13:59:18.316424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.052 [2024-07-15 13:59:18.323545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.052 [2024-07-15 13:59:18.323898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.052 [2024-07-15 13:59:18.323914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.052 [2024-07-15 13:59:18.330660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.052 [2024-07-15 13:59:18.330860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.052 [2024-07-15 13:59:18.330876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.052 [2024-07-15 13:59:18.338598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.052 [2024-07-15 13:59:18.338950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.052 [2024-07-15 13:59:18.338966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.346234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.346456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.346472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.354816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.355182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.355199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.360596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.360905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.360922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.366696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.366798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.366813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.373985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.374062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.374077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.381337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.381484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.381499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.391008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.391092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.391107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.398736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.398825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.398839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.406969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.407034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.407049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.414484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.414557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.414572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.421154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.421250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.421270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.428402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.428540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.428555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.435959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.436070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.436085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.444722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.444787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.444801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.454204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.454328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.454343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.461459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.461606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.461620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.470447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.470538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.470553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.477077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 
[2024-07-15 13:59:18.477189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.477213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.484034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.484112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.484131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.493340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.493442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.493457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.502436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.502595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.502610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.511488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.511632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.511647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.522237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.522507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.522522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.530731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.530835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.530850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.538312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.538378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.538393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.544810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.544902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.544917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.551165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.551244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.551259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.559549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.559716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.053 [2024-07-15 13:59:18.559732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.053 [2024-07-15 13:59:18.569077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.053 [2024-07-15 13:59:18.569344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.054 [2024-07-15 13:59:18.569359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.054 [2024-07-15 13:59:18.576347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.054 [2024-07-15 13:59:18.576460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.054 [2024-07-15 13:59:18.576475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.315 [2024-07-15 13:59:18.585477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.315 [2024-07-15 13:59:18.585607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.315 [2024-07-15 13:59:18.585621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.315 [2024-07-15 13:59:18.592926] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.315 [2024-07-15 13:59:18.593113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.315 [2024-07-15 13:59:18.593133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.315 [2024-07-15 13:59:18.601668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.315 [2024-07-15 13:59:18.601785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.315 [2024-07-15 13:59:18.601800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.315 [2024-07-15 13:59:18.608637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.315 [2024-07-15 13:59:18.608732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.315 [2024-07-15 13:59:18.608747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.315 [2024-07-15 13:59:18.615697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.315 [2024-07-15 13:59:18.615791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.315 [2024-07-15 13:59:18.615811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.315 [2024-07-15 13:59:18.623870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.315 [2024-07-15 13:59:18.623964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.315 [2024-07-15 13:59:18.623979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.315 [2024-07-15 13:59:18.631827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.315 [2024-07-15 13:59:18.631935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.315 [2024-07-15 13:59:18.631950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.315 [2024-07-15 13:59:18.638188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.315 [2024-07-15 13:59:18.638294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.315 [2024-07-15 13:59:18.638310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:52.315 [2024-07-15 13:59:18.646571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.315 [2024-07-15 13:59:18.646664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.315 [2024-07-15 13:59:18.646678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.315 [2024-07-15 13:59:18.653954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.315 [2024-07-15 13:59:18.654100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.315 [2024-07-15 13:59:18.654114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.315 [2024-07-15 13:59:18.662812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.315 [2024-07-15 13:59:18.662960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.315 [2024-07-15 13:59:18.662975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.315 [2024-07-15 13:59:18.671819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.315 [2024-07-15 13:59:18.672046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.315 [2024-07-15 13:59:18.672061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.315 [2024-07-15 13:59:18.678136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.315 [2024-07-15 13:59:18.678250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.315 [2024-07-15 13:59:18.678270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.315 [2024-07-15 13:59:18.687036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.315 [2024-07-15 13:59:18.687181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.315 [2024-07-15 13:59:18.687202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.315 [2024-07-15 13:59:18.695165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.315 [2024-07-15 13:59:18.695271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.695286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.702531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.702655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.702670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.708814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.708959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.708974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.714408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.714485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.714503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.720227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.720370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.720385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.726262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.726351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.726365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.733332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.733479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.733494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.742353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.742479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.742493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.751525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.751831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.751847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.761321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.761453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.761467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.771019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.771132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.771148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.780462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.780577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.780592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.790448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.790542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.790556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.799932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.800053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.800068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.808520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.808623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.808638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.818310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.818566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.818580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.828104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.828242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.828265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.316 [2024-07-15 13:59:18.838211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.316 [2024-07-15 13:59:18.838355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.316 [2024-07-15 13:59:18.838370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.578 [2024-07-15 13:59:18.847937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.578 [2024-07-15 13:59:18.848050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.578 [2024-07-15 13:59:18.848067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.578 [2024-07-15 13:59:18.857015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.578 [2024-07-15 13:59:18.857118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.578 [2024-07-15 13:59:18.857137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.578 [2024-07-15 13:59:18.865247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.578 [2024-07-15 13:59:18.865334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.578 [2024-07-15 13:59:18.865349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.578 [2024-07-15 13:59:18.873887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.578 [2024-07-15 13:59:18.874002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.578 
[2024-07-15 13:59:18.874017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.578 [2024-07-15 13:59:18.884112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.578 [2024-07-15 13:59:18.884192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.578 [2024-07-15 13:59:18.884207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.578 [2024-07-15 13:59:18.893260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.578 [2024-07-15 13:59:18.893342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.578 [2024-07-15 13:59:18.893358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.578 [2024-07-15 13:59:18.902232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.578 [2024-07-15 13:59:18.902316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.578 [2024-07-15 13:59:18.902331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.578 [2024-07-15 13:59:18.910642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.578 [2024-07-15 13:59:18.910906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:18.910921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:18.920496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:18.920590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:18.920606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:18.928169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:18.928253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:18.928268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:18.934878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:18.934984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:18.934999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:18.945016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:18.945098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:18.945113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:18.953062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:18.953174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:18.953189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:18.960812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:18.960939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:18.960955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:18.970286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:18.970464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:18.970481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:18.978061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:18.978325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:18.978340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:18.986900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:18.987109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:18.987128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:18.995120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:18.995234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:18.995249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:19.002195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:19.002296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:19.002311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:19.011086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:19.011289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:19.011304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:19.018164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:19.018325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:19.018339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:19.027942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:19.028006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:19.028020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:19.037116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:19.037196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:19.037211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:19.045128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:19.045195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:19.045210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:19.053861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:19.053958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:19.053975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:19.061857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:19.061964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:19.061979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:19.070748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:19.070832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:19.070847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.579 [2024-07-15 13:59:19.079596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.579 [2024-07-15 13:59:19.079710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.579 [2024-07-15 13:59:19.079725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.580 [2024-07-15 13:59:19.087572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.580 [2024-07-15 13:59:19.087654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.580 [2024-07-15 13:59:19.087670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.580 [2024-07-15 13:59:19.095064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.580 [2024-07-15 13:59:19.095139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.580 [2024-07-15 13:59:19.095154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.103418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.103567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.103586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.110356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 
[2024-07-15 13:59:19.110426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.110441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.116969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.117054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.117069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.123960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.124210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.124224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.132888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.133127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.133143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.139276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.139383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.139398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.145813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.145909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.145924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.151103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.151278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.151294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.158416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.158525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.158540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.167428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.167558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.167573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.174376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.174481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.174496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.181386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.181451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.181466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.189367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.189615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.189630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.197252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.197369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.197385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.203202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.203306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.203321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.211271] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.211383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.211398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.218651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.218790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.218804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.226201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.226291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.226309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.233464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.233556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.233571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.241913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.242013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.242031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.250545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.250673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.250694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.256333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.256529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.256543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:52.842 [2024-07-15 13:59:19.264522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.264631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.264646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.272166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.272327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.272343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.278315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.842 [2024-07-15 13:59:19.278528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.842 [2024-07-15 13:59:19.278543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.842 [2024-07-15 13:59:19.287022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.843 [2024-07-15 13:59:19.287085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.843 [2024-07-15 13:59:19.287100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.843 [2024-07-15 13:59:19.296388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.843 [2024-07-15 13:59:19.296534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.843 [2024-07-15 13:59:19.296549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.843 [2024-07-15 13:59:19.306346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.843 [2024-07-15 13:59:19.306435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.843 [2024-07-15 13:59:19.306450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.843 [2024-07-15 13:59:19.315298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.843 [2024-07-15 13:59:19.315385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.843 [2024-07-15 13:59:19.315399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.843 [2024-07-15 13:59:19.322673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.843 [2024-07-15 13:59:19.322776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.843 [2024-07-15 13:59:19.322790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.843 [2024-07-15 13:59:19.329459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.843 [2024-07-15 13:59:19.329666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.843 [2024-07-15 13:59:19.329681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:52.843 [2024-07-15 13:59:19.337622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.843 [2024-07-15 13:59:19.337887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.843 [2024-07-15 13:59:19.337903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.843 [2024-07-15 13:59:19.344954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.843 [2024-07-15 13:59:19.345067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.843 [2024-07-15 13:59:19.345082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:52.843 [2024-07-15 13:59:19.353136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.843 [2024-07-15 13:59:19.353253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.843 [2024-07-15 13:59:19.353267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:52.843 [2024-07-15 13:59:19.360792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:52.843 [2024-07-15 13:59:19.360880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.843 [2024-07-15 13:59:19.360900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.368791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.368889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.368904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.375177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.375339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.375354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.382163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.382327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.382344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.389578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.389716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.389731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.397713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.397801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.397819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.405484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.405566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.405581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.413089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.413244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.413261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.418842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.418915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.418930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.425009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.425128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.425143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.430921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.431061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.431075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.440526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.440633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.440648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.448077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.448159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.448180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.456475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.456597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.456612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.462742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.462879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.462896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.467679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.467789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 
[2024-07-15 13:59:19.467804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.472736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.472837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.472853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.478716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.478799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.478815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.485470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.485577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.485592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.496025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.496189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.496204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.505706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.505787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.505802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.515956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.516070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.516085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.526719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.105 [2024-07-15 13:59:19.526830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.105 [2024-07-15 13:59:19.526846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.105 [2024-07-15 13:59:19.536099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.106 [2024-07-15 13:59:19.536207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-07-15 13:59:19.536222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.106 [2024-07-15 13:59:19.545609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.106 [2024-07-15 13:59:19.545886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-07-15 13:59:19.545900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.106 [2024-07-15 13:59:19.555723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.106 [2024-07-15 13:59:19.555886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-07-15 13:59:19.555901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.106 [2024-07-15 13:59:19.565233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.106 [2024-07-15 13:59:19.565329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-07-15 13:59:19.565344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.106 [2024-07-15 13:59:19.575078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.106 [2024-07-15 13:59:19.575196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-07-15 13:59:19.575211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.106 [2024-07-15 13:59:19.582630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.106 [2024-07-15 13:59:19.582694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-07-15 13:59:19.582709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.106 [2024-07-15 13:59:19.590830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.106 [2024-07-15 13:59:19.590952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-07-15 13:59:19.590970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.106 [2024-07-15 13:59:19.598207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.106 [2024-07-15 13:59:19.598309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-07-15 13:59:19.598323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.106 [2024-07-15 13:59:19.606686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.106 [2024-07-15 13:59:19.606773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-07-15 13:59:19.606787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.106 [2024-07-15 13:59:19.614336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.106 [2024-07-15 13:59:19.614431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-07-15 13:59:19.614446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.106 [2024-07-15 13:59:19.622879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.106 [2024-07-15 13:59:19.622943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.106 [2024-07-15 13:59:19.622958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.631197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.631277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.631292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.639624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.639740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.639755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.648058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.648128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.648143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.655607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.655710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.655726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.664657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.664725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.664740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.673631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.673760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.673775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.682025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.682222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.682237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.693423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.693693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.693710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.703622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.703728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.703743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.714206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 
[2024-07-15 13:59:19.714353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.714368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.723733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.723829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.723845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.733828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.734071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.734086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.742821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.742915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.742929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.751989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.752054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.752069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.761156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.761278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.761293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.769914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.769984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.769999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.778901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.778978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.778992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.787704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.787912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.787926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.796481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.796626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.796641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.805567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.805647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.805661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.811887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.811950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.811965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.819650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.819712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.819734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.826886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.368 [2024-07-15 13:59:19.826985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.368 [2024-07-15 13:59:19.826999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.368 [2024-07-15 13:59:19.836963] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.369 [2024-07-15 13:59:19.837240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-07-15 13:59:19.837256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.369 [2024-07-15 13:59:19.846278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.369 [2024-07-15 13:59:19.846365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-07-15 13:59:19.846380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.369 [2024-07-15 13:59:19.855469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.369 [2024-07-15 13:59:19.855714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-07-15 13:59:19.855738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.369 [2024-07-15 13:59:19.863449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.369 [2024-07-15 13:59:19.863528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-07-15 13:59:19.863543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.369 [2024-07-15 13:59:19.872195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.369 [2024-07-15 13:59:19.872315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-07-15 13:59:19.872331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.369 [2024-07-15 13:59:19.881315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.369 [2024-07-15 13:59:19.881484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-07-15 13:59:19.881499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.369 [2024-07-15 13:59:19.888907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.369 [2024-07-15 13:59:19.888979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.369 [2024-07-15 13:59:19.888994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:53.631 [2024-07-15 13:59:19.894141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.894215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.894230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.898210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.898285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.898299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.902563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.902681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.902696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.907551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.907685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.907700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.912164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.912330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.912345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.916397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.916503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.916518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.921600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.921729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.921744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.926296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.926390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.926405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.931570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.931634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.931649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.938218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.938329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.938344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.943505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.943601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.943622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.951796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.951871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.951886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.957967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.958114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.958136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.963656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.963835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.963849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.969973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.970098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.970113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.976635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.976781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.976799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.982613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.982762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.982777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.990274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.990358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.990383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.631 [2024-07-15 13:59:19.997567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1540c80) with pdu=0x2000190fef90 00:28:53.631 [2024-07-15 13:59:19.997668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.631 [2024-07-15 13:59:19.997685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:53.631 00:28:53.631 Latency(us) 00:28:53.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.631 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:53.631 nvme0n1 : 2.00 3804.86 475.61 0.00 0.00 4198.70 1911.47 11632.64 00:28:53.631 =================================================================================================================== 00:28:53.631 Total : 3804.86 475.61 0.00 0.00 4198.70 1911.47 11632.64 00:28:53.631 0 00:28:53.631 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:53.631 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:53.631 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:53.631 | .driver_specific 00:28:53.631 | 
.nvme_error 00:28:53.631 | .status_code 00:28:53.631 | .command_transient_transport_error' 00:28:53.631 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 245 > 0 )) 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1265435 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1265435 ']' 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1265435 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1265435 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1265435' 00:28:53.893 killing process with pid 1265435 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1265435 00:28:53.893 Received shutdown signal, test time was about 2.000000 seconds 00:28:53.893 00:28:53.893 Latency(us) 00:28:53.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.893 =================================================================================================================== 00:28:53.893 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1265435 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1263034 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1263034 ']' 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1263034 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1263034 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1263034' 00:28:53.893 killing process with pid 1263034 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1263034 00:28:53.893 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1263034 00:28:54.154 00:28:54.154 real 0m16.344s 00:28:54.154 user 0m32.072s 
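For reference, the transient-error check traced just above reduces to one RPC call piped through jq. A minimal sketch, assuming the bperf RPC socket (/var/tmp/bperf.sock), the bdev name (nvme0n1), and the iostat schema seen in this run, with rpc.py invoked relative to the SPDK checkout:
# Count COMMAND TRANSIENT TRANSPORT ERROR completions recorded for the bperf bdev,
# mirroring the get_transient_errcount step in host/digest.sh traced above
errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The digest_error test only passes if at least one such error was injected (245 in this run)
(( errcount > 0 ))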
00:28:54.154 sys 0m3.241s 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:54.154 ************************************ 00:28:54.154 END TEST nvmf_digest_error 00:28:54.154 ************************************ 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:54.154 rmmod nvme_tcp 00:28:54.154 rmmod nvme_fabrics 00:28:54.154 rmmod nvme_keyring 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1263034 ']' 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1263034 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1263034 ']' 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1263034 00:28:54.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1263034) - No such process 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1263034 is not found' 00:28:54.154 Process with pid 1263034 is not found 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:54.154 13:59:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.698 13:59:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:56.698 00:28:56.698 real 0m42.041s 00:28:56.698 user 1m5.787s 00:28:56.698 sys 0m11.925s 00:28:56.698 13:59:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:56.698 13:59:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:56.698 ************************************ 00:28:56.698 END TEST nvmf_digest 00:28:56.698 ************************************ 00:28:56.698 13:59:22 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:28:56.698 13:59:22 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:28:56.698 13:59:22 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:28:56.698 13:59:22 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:28:56.698 13:59:22 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:56.698 13:59:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:56.698 13:59:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:56.698 13:59:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:56.698 ************************************ 00:28:56.698 START TEST nvmf_bdevperf 00:28:56.698 ************************************ 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:56.698 * Looking for test storage... 00:28:56.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:56.698 13:59:22 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.698 13:59:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:56.699 13:59:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.699 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:56.699 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:56.699 13:59:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:56.699 13:59:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:04.845 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:04.845 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:04.845 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for 
pci in "${pci_devs[@]}" 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:04.845 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.845 13:59:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.845 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:04.845 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.845 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.845 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.845 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:04.845 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:29:04.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:29:04.846 00:29:04.846 --- 10.0.0.2 ping statistics --- 00:29:04.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.846 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:04.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:29:04.846 00:29:04.846 --- 10.0.0.1 ping statistics --- 00:29:04.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.846 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1270175 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1270175 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1270175 ']' 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:04.846 13:59:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.846 [2024-07-15 13:59:30.260800] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:29:04.846 [2024-07-15 13:59:30.260859] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.846 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.846 [2024-07-15 13:59:30.346615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:04.846 [2024-07-15 13:59:30.417373] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.846 [2024-07-15 13:59:30.417416] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.846 [2024-07-15 13:59:30.417424] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.846 [2024-07-15 13:59:30.417430] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.846 [2024-07-15 13:59:30.417436] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:04.846 [2024-07-15 13:59:30.417551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.846 [2024-07-15 13:59:30.417712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.846 [2024-07-15 13:59:30.417712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.846 [2024-07-15 13:59:31.053536] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.846 Malloc0 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:04.846 [2024-07-15 13:59:31.115466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:04.846 { 00:29:04.846 "params": { 00:29:04.846 "name": "Nvme$subsystem", 00:29:04.846 "trtype": "$TEST_TRANSPORT", 00:29:04.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.846 "adrfam": "ipv4", 00:29:04.846 "trsvcid": "$NVMF_PORT", 00:29:04.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.846 "hdgst": ${hdgst:-false}, 00:29:04.846 "ddgst": ${ddgst:-false} 00:29:04.846 }, 00:29:04.846 "method": "bdev_nvme_attach_controller" 00:29:04.846 } 00:29:04.846 EOF 00:29:04.846 )") 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:04.846 13:59:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:04.846 "params": { 00:29:04.846 "name": "Nvme1", 00:29:04.846 "trtype": "tcp", 00:29:04.846 "traddr": "10.0.0.2", 00:29:04.846 "adrfam": "ipv4", 00:29:04.846 "trsvcid": "4420", 00:29:04.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:04.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:04.846 "hdgst": false, 00:29:04.846 "ddgst": false 00:29:04.846 }, 00:29:04.846 "method": "bdev_nvme_attach_controller" 00:29:04.846 }' 00:29:04.846 [2024-07-15 13:59:31.169653] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:29:04.846 [2024-07-15 13:59:31.169699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1270504 ] 00:29:04.846 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.846 [2024-07-15 13:59:31.227160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.846 [2024-07-15 13:59:31.291760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.107 Running I/O for 1 seconds... 00:29:06.491 00:29:06.491 Latency(us) 00:29:06.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.491 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:06.491 Verification LBA range: start 0x0 length 0x4000 00:29:06.491 Nvme1n1 : 1.00 8921.24 34.85 0.00 0.00 14268.72 1597.44 17585.49 00:29:06.491 =================================================================================================================== 00:29:06.491 Total : 8921.24 34.85 0.00 0.00 14268.72 1597.44 17585.49 00:29:06.491 13:59:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1270837 00:29:06.491 13:59:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:06.491 13:59:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:06.492 13:59:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:06.492 13:59:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:06.492 13:59:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:06.492 13:59:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:06.492 13:59:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:06.492 { 00:29:06.492 "params": { 00:29:06.492 "name": "Nvme$subsystem", 00:29:06.492 "trtype": "$TEST_TRANSPORT", 00:29:06.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.492 "adrfam": "ipv4", 00:29:06.492 "trsvcid": "$NVMF_PORT", 00:29:06.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.492 "hdgst": ${hdgst:-false}, 00:29:06.492 "ddgst": ${ddgst:-false} 00:29:06.492 }, 00:29:06.492 "method": "bdev_nvme_attach_controller" 00:29:06.492 } 00:29:06.492 EOF 00:29:06.492 )") 00:29:06.492 13:59:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:06.492 13:59:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:06.492 13:59:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:06.492 13:59:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:06.492 "params": { 00:29:06.492 "name": "Nvme1", 00:29:06.492 "trtype": "tcp", 00:29:06.492 "traddr": "10.0.0.2", 00:29:06.492 "adrfam": "ipv4", 00:29:06.492 "trsvcid": "4420", 00:29:06.492 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:06.492 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:06.492 "hdgst": false, 00:29:06.492 "ddgst": false 00:29:06.492 }, 00:29:06.492 "method": "bdev_nvme_attach_controller" 00:29:06.492 }' 00:29:06.492 [2024-07-15 13:59:32.787302] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:29:06.492 [2024-07-15 13:59:32.787358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1270837 ] 00:29:06.492 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.492 [2024-07-15 13:59:32.846239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.492 [2024-07-15 13:59:32.909770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.752 Running I/O for 15 seconds... 00:29:09.298 13:59:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1270175 00:29:09.298 13:59:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:09.298 [2024-07-15 13:59:35.751938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.751979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:09.298 [2024-07-15 13:59:35.752515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.298 [2024-07-15 13:59:35.752571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.298 [2024-07-15 13:59:35.752581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 
13:59:35.752679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.752985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.752995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753011] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 
nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.299 [2024-07-15 13:59:35.753291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.299 [2024-07-15 13:59:35.753300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110632 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:09.300 [2024-07-15 13:59:35.753524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 
13:59:35.753694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.753990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.753997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.754006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.300 [2024-07-15 13:59:35.754013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.300 [2024-07-15 13:59:35.754023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.301 [2024-07-15 13:59:35.754031] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.301 [2024-07-15 13:59:35.754040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.301 [2024-07-15 13:59:35.754047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.301 [2024-07-15 13:59:35.754056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.301 [2024-07-15 13:59:35.754063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.301 [2024-07-15 13:59:35.754073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.301 [2024-07-15 13:59:35.754080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.301 [2024-07-15 13:59:35.754090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.301 [2024-07-15 13:59:35.754097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.301 [2024-07-15 13:59:35.754106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.301 [2024-07-15 13:59:35.754113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.301 [2024-07-15 13:59:35.754211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.301 [2024-07-15 13:59:35.754220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.301 [2024-07-15 13:59:35.754229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.301 [2024-07-15 13:59:35.754236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.301 [2024-07-15 13:59:35.754246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.301 [2024-07-15 13:59:35.754253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.301 [2024-07-15 13:59:35.754261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc34a00 is same with the state(5) to be set 00:29:09.301 [2024-07-15 13:59:35.754271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:09.301 [2024-07-15 13:59:35.754278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:09.301 [2024-07-15 13:59:35.754285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111024 len:8 PRP1 0x0 PRP2 0x0 00:29:09.301 [2024-07-15 13:59:35.754294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.301 [2024-07-15 13:59:35.754337] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc34a00 was disconnected and freed. reset controller. 00:29:09.301 [2024-07-15 13:59:35.757897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.301 [2024-07-15 13:59:35.757947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.301 [2024-07-15 13:59:35.758845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.301 [2024-07-15 13:59:35.758863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.301 [2024-07-15 13:59:35.758872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.301 [2024-07-15 13:59:35.759094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.301 [2024-07-15 13:59:35.759319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.301 [2024-07-15 13:59:35.759329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.301 [2024-07-15 13:59:35.759338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.301 [2024-07-15 13:59:35.762901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.301 [2024-07-15 13:59:35.772147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.301 [2024-07-15 13:59:35.772789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.301 [2024-07-15 13:59:35.772805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.301 [2024-07-15 13:59:35.772812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.301 [2024-07-15 13:59:35.773033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.301 [2024-07-15 13:59:35.773259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.301 [2024-07-15 13:59:35.773268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.301 [2024-07-15 13:59:35.773275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.301 [2024-07-15 13:59:35.776835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
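Every READ completion in the flood above carries the status pair "(00/08)": status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification names "Command Aborted due to SQ Deletion". Once the submission queue is torn down, each queued read is completed with that status, qpair 0xc34a00 is disconnected and freed, and the controller reset begins. A minimal sketch of decoding that pair, assuming a small hand-written table rather than SPDK's own string tables:

# Illustrative decoder for the "(SCT/SC)" pair printed by
# spdk_nvme_print_completion, e.g. "ABORTED - SQ DELETION (00/08)".
# The table covers only a few generic-command-status codes from the
# NVMe base specification; it is a sketch, not SPDK code.

GENERIC_STATUS = {  # status code type 0x0 (generic command status)
    0x00: "Successful Completion",
    0x07: "Command Abort Requested",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(sct: int, sc: int) -> str:
    """Return a readable name for a (status code type, status code) pair."""
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"SCT 0x{sct:x} / SC 0x{sc:02x}"

if __name__ == "__main__":
    # "(00/08)" in the log means SCT = 0x0, SC = 0x08.
    print(decode_status(0x0, 0x08))   # -> Command Aborted due to SQ Deletion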
00:29:09.301 [2024-07-15 13:59:35.786081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.301 [2024-07-15 13:59:35.786765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.301 [2024-07-15 13:59:35.786803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.301 [2024-07-15 13:59:35.786814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.301 [2024-07-15 13:59:35.787057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.301 [2024-07-15 13:59:35.787290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.301 [2024-07-15 13:59:35.787300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.301 [2024-07-15 13:59:35.787315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.301 [2024-07-15 13:59:35.790880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.301 [2024-07-15 13:59:35.799928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.301 [2024-07-15 13:59:35.800673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.301 [2024-07-15 13:59:35.800710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.301 [2024-07-15 13:59:35.800721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.301 [2024-07-15 13:59:35.800961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.301 [2024-07-15 13:59:35.801192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.301 [2024-07-15 13:59:35.801202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.301 [2024-07-15 13:59:35.801210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.301 [2024-07-15 13:59:35.804776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.301 [2024-07-15 13:59:35.813807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.301 [2024-07-15 13:59:35.814556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.301 [2024-07-15 13:59:35.814593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.301 [2024-07-15 13:59:35.814604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.301 [2024-07-15 13:59:35.814844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.301 [2024-07-15 13:59:35.815068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.301 [2024-07-15 13:59:35.815076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.301 [2024-07-15 13:59:35.815084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.301 [2024-07-15 13:59:35.818656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.563 [2024-07-15 13:59:35.827692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.563 [2024-07-15 13:59:35.828436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.563 [2024-07-15 13:59:35.828473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.563 [2024-07-15 13:59:35.828484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.563 [2024-07-15 13:59:35.828724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.563 [2024-07-15 13:59:35.828948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.563 [2024-07-15 13:59:35.828956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.563 [2024-07-15 13:59:35.828964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.563 [2024-07-15 13:59:35.832537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.563 [2024-07-15 13:59:35.841572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.563 [2024-07-15 13:59:35.842331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.563 [2024-07-15 13:59:35.842373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.563 [2024-07-15 13:59:35.842384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.563 [2024-07-15 13:59:35.842624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.563 [2024-07-15 13:59:35.842847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.563 [2024-07-15 13:59:35.842856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.563 [2024-07-15 13:59:35.842863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.563 [2024-07-15 13:59:35.846431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.563 [2024-07-15 13:59:35.855465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.563 [2024-07-15 13:59:35.856221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.563 [2024-07-15 13:59:35.856259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.563 [2024-07-15 13:59:35.856271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.563 [2024-07-15 13:59:35.856514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.564 [2024-07-15 13:59:35.856737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.564 [2024-07-15 13:59:35.856746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.564 [2024-07-15 13:59:35.856754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.564 [2024-07-15 13:59:35.860323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.564 [2024-07-15 13:59:35.869348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.564 [2024-07-15 13:59:35.869970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.564 [2024-07-15 13:59:35.869988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.564 [2024-07-15 13:59:35.869996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.564 [2024-07-15 13:59:35.870221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.564 [2024-07-15 13:59:35.870441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.564 [2024-07-15 13:59:35.870450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.564 [2024-07-15 13:59:35.870457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.564 [2024-07-15 13:59:35.874012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.564 [2024-07-15 13:59:35.883253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.564 [2024-07-15 13:59:35.883899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.564 [2024-07-15 13:59:35.883914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.564 [2024-07-15 13:59:35.883921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.564 [2024-07-15 13:59:35.884145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.564 [2024-07-15 13:59:35.884370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.564 [2024-07-15 13:59:35.884378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.564 [2024-07-15 13:59:35.884385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.564 [2024-07-15 13:59:35.887942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
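Each reset attempt above has the same shape: nvme_ctrlr_disconnect announces the reset, posix_sock_create's connect() toward 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED on Linux, i.e. nothing is accepting on that port), the subsequent attempt to flush the same tqpair reports (9): Bad file descriptor, and the controller is left in the failed state. A short sketch, assuming a placeholder address, that reproduces the same errno outside SPDK:

# Sketch (not SPDK code): show that errno 111 is ECONNREFUSED on Linux by
# attempting a TCP connect to a port with no listener. The address below is
# a placeholder; use any reachable host/port that is not listening.
import errno
import socket

def try_connect(host: str, port: int) -> None:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2.0)
    try:
        s.connect((host, port))
        print("connected")
    except OSError as e:
        # For a refused connection this prints: errno = 111 (ECONNREFUSED)
        name = errno.errorcode.get(e.errno, "?")
        print(f"connect() failed, errno = {e.errno} ({name})")
    finally:
        s.close()

if __name__ == "__main__":
    try_connect("127.0.0.1", 4420)  # placeholder target, same port as in the log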
00:29:09.564 [2024-07-15 13:59:35.897198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.564 [2024-07-15 13:59:35.897808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.564 [2024-07-15 13:59:35.897824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.564 [2024-07-15 13:59:35.897831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.564 [2024-07-15 13:59:35.898051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.564 [2024-07-15 13:59:35.898276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.564 [2024-07-15 13:59:35.898285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.564 [2024-07-15 13:59:35.898292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.564 [2024-07-15 13:59:35.901848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.564 [2024-07-15 13:59:35.911090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.564 [2024-07-15 13:59:35.911797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.564 [2024-07-15 13:59:35.911834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.564 [2024-07-15 13:59:35.911844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.564 [2024-07-15 13:59:35.912084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.564 [2024-07-15 13:59:35.912317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.564 [2024-07-15 13:59:35.912327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.564 [2024-07-15 13:59:35.912335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.564 [2024-07-15 13:59:35.915900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.564 [2024-07-15 13:59:35.924935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.564 [2024-07-15 13:59:35.925578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.564 [2024-07-15 13:59:35.925615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.564 [2024-07-15 13:59:35.925626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.564 [2024-07-15 13:59:35.925866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.564 [2024-07-15 13:59:35.926090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.564 [2024-07-15 13:59:35.926099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.564 [2024-07-15 13:59:35.926107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.564 [2024-07-15 13:59:35.929681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.564 [2024-07-15 13:59:35.938924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.564 [2024-07-15 13:59:35.939676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.564 [2024-07-15 13:59:35.939712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.564 [2024-07-15 13:59:35.939723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.564 [2024-07-15 13:59:35.939963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.564 [2024-07-15 13:59:35.940195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.564 [2024-07-15 13:59:35.940204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.564 [2024-07-15 13:59:35.940212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.564 [2024-07-15 13:59:35.943779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.564 [2024-07-15 13:59:35.952810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.564 [2024-07-15 13:59:35.953533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.564 [2024-07-15 13:59:35.953570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.564 [2024-07-15 13:59:35.953581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.564 [2024-07-15 13:59:35.953821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.564 [2024-07-15 13:59:35.954044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.564 [2024-07-15 13:59:35.954053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.564 [2024-07-15 13:59:35.954060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.564 [2024-07-15 13:59:35.957634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.564 [2024-07-15 13:59:35.966698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.564 [2024-07-15 13:59:35.967422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.564 [2024-07-15 13:59:35.967460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.564 [2024-07-15 13:59:35.967472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.564 [2024-07-15 13:59:35.967712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.564 [2024-07-15 13:59:35.967936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.564 [2024-07-15 13:59:35.967945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.564 [2024-07-15 13:59:35.967952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.564 [2024-07-15 13:59:35.971526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.564 [2024-07-15 13:59:35.980558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.564 [2024-07-15 13:59:35.981229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.564 [2024-07-15 13:59:35.981266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.564 [2024-07-15 13:59:35.981283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.564 [2024-07-15 13:59:35.981526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.564 [2024-07-15 13:59:35.981749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.564 [2024-07-15 13:59:35.981758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.564 [2024-07-15 13:59:35.981766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.564 [2024-07-15 13:59:35.985340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.564 [2024-07-15 13:59:35.994385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.564 [2024-07-15 13:59:35.995056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.564 [2024-07-15 13:59:35.995074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.564 [2024-07-15 13:59:35.995082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.564 [2024-07-15 13:59:35.995311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.564 [2024-07-15 13:59:35.995532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.564 [2024-07-15 13:59:35.995540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.564 [2024-07-15 13:59:35.995547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.564 [2024-07-15 13:59:35.999105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.564 [2024-07-15 13:59:36.008346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.564 [2024-07-15 13:59:36.008883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.564 [2024-07-15 13:59:36.008900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.564 [2024-07-15 13:59:36.008908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.565 [2024-07-15 13:59:36.009134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.565 [2024-07-15 13:59:36.009356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.565 [2024-07-15 13:59:36.009365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.565 [2024-07-15 13:59:36.009372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.565 [2024-07-15 13:59:36.012930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.565 [2024-07-15 13:59:36.022166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.565 [2024-07-15 13:59:36.022860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.565 [2024-07-15 13:59:36.022897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.565 [2024-07-15 13:59:36.022908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.565 [2024-07-15 13:59:36.023155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.565 [2024-07-15 13:59:36.023380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.565 [2024-07-15 13:59:36.023393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.565 [2024-07-15 13:59:36.023401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.565 [2024-07-15 13:59:36.026966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
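Because the same few messages repeat for every attempt, a saved copy of this console output is easiest to read as counts. A small tally helper, assuming the exact message strings shown above and a log file path passed on the command line (it is an illustration, not part of the test suite):

# Illustrative log-tally helper: count reset attempts, refused connects,
# reset failures, and aborted READs in a saved copy of this console output.
import re
import sys
from collections import Counter

PATTERNS = {
    "reset attempts":  re.compile(r"resetting controller"),
    "connect refused": re.compile(r"connect\(\) failed, errno = 111"),
    "reset failures":  re.compile(r"Resetting controller failed\."),
    "aborted READs":   re.compile(r"ABORTED - SQ DELETION \(00/08\)"),
}

def tally(path: str) -> Counter:
    counts = Counter()
    with open(path, "r", errors="replace") as fh:
        for line in fh:
            for name, pat in PATTERNS.items():
                counts[name] += len(pat.findall(line))
    return counts

if __name__ == "__main__":
    for name, n in tally(sys.argv[1]).items():
        print(f"{name}: {n}")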
00:29:09.565 [2024-07-15 13:59:36.035997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.565 [2024-07-15 13:59:36.036626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.565 [2024-07-15 13:59:36.036645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.565 [2024-07-15 13:59:36.036653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.565 [2024-07-15 13:59:36.036874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.565 [2024-07-15 13:59:36.037094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.565 [2024-07-15 13:59:36.037101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.565 [2024-07-15 13:59:36.037108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.565 [2024-07-15 13:59:36.040674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.565 [2024-07-15 13:59:36.049906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.565 [2024-07-15 13:59:36.050538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.565 [2024-07-15 13:59:36.050575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.565 [2024-07-15 13:59:36.050586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.565 [2024-07-15 13:59:36.050825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.565 [2024-07-15 13:59:36.051049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.565 [2024-07-15 13:59:36.051057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.565 [2024-07-15 13:59:36.051066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.565 [2024-07-15 13:59:36.054647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.565 [2024-07-15 13:59:36.063891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.565 [2024-07-15 13:59:36.064646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.565 [2024-07-15 13:59:36.064683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.565 [2024-07-15 13:59:36.064695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.565 [2024-07-15 13:59:36.064936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.565 [2024-07-15 13:59:36.065167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.565 [2024-07-15 13:59:36.065176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.565 [2024-07-15 13:59:36.065184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.565 [2024-07-15 13:59:36.068747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.565 [2024-07-15 13:59:36.077780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.565 [2024-07-15 13:59:36.078367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.565 [2024-07-15 13:59:36.078403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.565 [2024-07-15 13:59:36.078415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.565 [2024-07-15 13:59:36.078655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.565 [2024-07-15 13:59:36.078878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.565 [2024-07-15 13:59:36.078886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.565 [2024-07-15 13:59:36.078894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.565 [2024-07-15 13:59:36.082465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.827 [2024-07-15 13:59:36.091719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.827 [2024-07-15 13:59:36.092490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.827 [2024-07-15 13:59:36.092527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.827 [2024-07-15 13:59:36.092538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.827 [2024-07-15 13:59:36.092778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.827 [2024-07-15 13:59:36.093002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.827 [2024-07-15 13:59:36.093010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.827 [2024-07-15 13:59:36.093018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.827 [2024-07-15 13:59:36.096599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.827 [2024-07-15 13:59:36.105633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.827 [2024-07-15 13:59:36.106406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.827 [2024-07-15 13:59:36.106442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.827 [2024-07-15 13:59:36.106453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.827 [2024-07-15 13:59:36.106693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.827 [2024-07-15 13:59:36.106917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.827 [2024-07-15 13:59:36.106925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.827 [2024-07-15 13:59:36.106933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.827 [2024-07-15 13:59:36.110509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.827 [2024-07-15 13:59:36.119543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.827 [2024-07-15 13:59:36.120253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.827 [2024-07-15 13:59:36.120289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.827 [2024-07-15 13:59:36.120302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.827 [2024-07-15 13:59:36.120547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.827 [2024-07-15 13:59:36.120771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.827 [2024-07-15 13:59:36.120779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.827 [2024-07-15 13:59:36.120787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.827 [2024-07-15 13:59:36.124360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.828 [2024-07-15 13:59:36.133391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.828 [2024-07-15 13:59:36.133937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-07-15 13:59:36.133956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.828 [2024-07-15 13:59:36.133964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.828 [2024-07-15 13:59:36.134189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.828 [2024-07-15 13:59:36.134410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.828 [2024-07-15 13:59:36.134418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.828 [2024-07-15 13:59:36.134425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.828 [2024-07-15 13:59:36.138009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.828 [2024-07-15 13:59:36.147247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.828 [2024-07-15 13:59:36.147856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-07-15 13:59:36.147872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.828 [2024-07-15 13:59:36.147879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.828 [2024-07-15 13:59:36.148099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.828 [2024-07-15 13:59:36.148324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.828 [2024-07-15 13:59:36.148333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.828 [2024-07-15 13:59:36.148340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.828 [2024-07-15 13:59:36.151895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.828 [2024-07-15 13:59:36.161133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.828 [2024-07-15 13:59:36.161759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-07-15 13:59:36.161774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.828 [2024-07-15 13:59:36.161781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.828 [2024-07-15 13:59:36.162001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.828 [2024-07-15 13:59:36.162226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.828 [2024-07-15 13:59:36.162235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.828 [2024-07-15 13:59:36.162246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.828 [2024-07-15 13:59:36.165806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.828 [2024-07-15 13:59:36.175072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.828 [2024-07-15 13:59:36.175776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-07-15 13:59:36.175813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.828 [2024-07-15 13:59:36.175823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.828 [2024-07-15 13:59:36.176063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.828 [2024-07-15 13:59:36.176294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.828 [2024-07-15 13:59:36.176304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.828 [2024-07-15 13:59:36.176311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.828 [2024-07-15 13:59:36.179874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.828 [2024-07-15 13:59:36.188907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.828 [2024-07-15 13:59:36.189653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-07-15 13:59:36.189690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.828 [2024-07-15 13:59:36.189701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.828 [2024-07-15 13:59:36.189941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.828 [2024-07-15 13:59:36.190172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.828 [2024-07-15 13:59:36.190181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.828 [2024-07-15 13:59:36.190189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.828 [2024-07-15 13:59:36.193759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.828 [2024-07-15 13:59:36.202795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.828 [2024-07-15 13:59:36.203497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-07-15 13:59:36.203534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.828 [2024-07-15 13:59:36.203545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.828 [2024-07-15 13:59:36.203785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.828 [2024-07-15 13:59:36.204008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.828 [2024-07-15 13:59:36.204016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.828 [2024-07-15 13:59:36.204024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.828 [2024-07-15 13:59:36.207598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.828 [2024-07-15 13:59:36.216622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.828 [2024-07-15 13:59:36.217405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-07-15 13:59:36.217446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.828 [2024-07-15 13:59:36.217458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.828 [2024-07-15 13:59:36.217697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.828 [2024-07-15 13:59:36.217921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.828 [2024-07-15 13:59:36.217929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.828 [2024-07-15 13:59:36.217937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.828 [2024-07-15 13:59:36.221514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.828 [2024-07-15 13:59:36.230543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.828 [2024-07-15 13:59:36.231224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-07-15 13:59:36.231261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.828 [2024-07-15 13:59:36.231273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.828 [2024-07-15 13:59:36.231514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.828 [2024-07-15 13:59:36.231737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.828 [2024-07-15 13:59:36.231745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.828 [2024-07-15 13:59:36.231753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.828 [2024-07-15 13:59:36.235325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.828 [2024-07-15 13:59:36.244356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.828 [2024-07-15 13:59:36.244998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-07-15 13:59:36.245034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.828 [2024-07-15 13:59:36.245045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.828 [2024-07-15 13:59:36.245293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.828 [2024-07-15 13:59:36.245519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.828 [2024-07-15 13:59:36.245527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.828 [2024-07-15 13:59:36.245535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.828 [2024-07-15 13:59:36.249097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.828 [2024-07-15 13:59:36.258339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.828 [2024-07-15 13:59:36.259000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-07-15 13:59:36.259018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.828 [2024-07-15 13:59:36.259027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.828 [2024-07-15 13:59:36.259253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.828 [2024-07-15 13:59:36.259478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.828 [2024-07-15 13:59:36.259486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.828 [2024-07-15 13:59:36.259494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.828 [2024-07-15 13:59:36.263050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.828 [2024-07-15 13:59:36.272285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.828 [2024-07-15 13:59:36.272893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.828 [2024-07-15 13:59:36.272908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.828 [2024-07-15 13:59:36.272915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.828 [2024-07-15 13:59:36.273141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.829 [2024-07-15 13:59:36.273361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.829 [2024-07-15 13:59:36.273368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.829 [2024-07-15 13:59:36.273375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.829 [2024-07-15 13:59:36.276929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.829 [2024-07-15 13:59:36.286159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.829 [2024-07-15 13:59:36.286838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-07-15 13:59:36.286875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.829 [2024-07-15 13:59:36.286885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.829 [2024-07-15 13:59:36.287134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.829 [2024-07-15 13:59:36.287358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.829 [2024-07-15 13:59:36.287366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.829 [2024-07-15 13:59:36.287374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.829 [2024-07-15 13:59:36.290935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.829 [2024-07-15 13:59:36.299975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.829 [2024-07-15 13:59:36.300676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-07-15 13:59:36.300713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.829 [2024-07-15 13:59:36.300724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.829 [2024-07-15 13:59:36.300963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.829 [2024-07-15 13:59:36.301196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.829 [2024-07-15 13:59:36.301205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.829 [2024-07-15 13:59:36.301213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.829 [2024-07-15 13:59:36.304780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.829 [2024-07-15 13:59:36.313809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.829 [2024-07-15 13:59:36.314527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-07-15 13:59:36.314564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.829 [2024-07-15 13:59:36.314575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.829 [2024-07-15 13:59:36.314815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.829 [2024-07-15 13:59:36.315039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.829 [2024-07-15 13:59:36.315047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.829 [2024-07-15 13:59:36.315055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.829 [2024-07-15 13:59:36.318626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.829 [2024-07-15 13:59:36.327656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.829 [2024-07-15 13:59:36.328430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-07-15 13:59:36.328467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.829 [2024-07-15 13:59:36.328478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.829 [2024-07-15 13:59:36.328717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.829 [2024-07-15 13:59:36.328941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.829 [2024-07-15 13:59:36.328949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.829 [2024-07-15 13:59:36.328957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.829 [2024-07-15 13:59:36.332532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.829 [2024-07-15 13:59:36.341560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.829 [2024-07-15 13:59:36.342367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.829 [2024-07-15 13:59:36.342404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:09.829 [2024-07-15 13:59:36.342415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:09.829 [2024-07-15 13:59:36.342655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:09.829 [2024-07-15 13:59:36.342878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.829 [2024-07-15 13:59:36.342887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.829 [2024-07-15 13:59:36.342894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.829 [2024-07-15 13:59:36.346470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.091 [2024-07-15 13:59:36.355498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.091 [2024-07-15 13:59:36.356183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-15 13:59:36.356207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.091 [2024-07-15 13:59:36.356220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.091 [2024-07-15 13:59:36.356446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.091 [2024-07-15 13:59:36.356667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.091 [2024-07-15 13:59:36.356675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.091 [2024-07-15 13:59:36.356683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.091 [2024-07-15 13:59:36.360250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.091 [2024-07-15 13:59:36.369486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.091 [2024-07-15 13:59:36.370204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-15 13:59:36.370240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.091 [2024-07-15 13:59:36.370252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.091 [2024-07-15 13:59:36.370493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.091 [2024-07-15 13:59:36.370717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.091 [2024-07-15 13:59:36.370725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.091 [2024-07-15 13:59:36.370733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.091 [2024-07-15 13:59:36.374303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.091 [2024-07-15 13:59:36.383361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.091 [2024-07-15 13:59:36.384081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-15 13:59:36.384119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.091 [2024-07-15 13:59:36.384137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.091 [2024-07-15 13:59:36.384377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.091 [2024-07-15 13:59:36.384601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.091 [2024-07-15 13:59:36.384610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.091 [2024-07-15 13:59:36.384618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.091 [2024-07-15 13:59:36.388184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.091 [2024-07-15 13:59:36.397224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.091 [2024-07-15 13:59:36.397972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-15 13:59:36.398009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.091 [2024-07-15 13:59:36.398019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.091 [2024-07-15 13:59:36.398268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.091 [2024-07-15 13:59:36.398493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.091 [2024-07-15 13:59:36.398508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.091 [2024-07-15 13:59:36.398516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.091 [2024-07-15 13:59:36.402077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.091 [2024-07-15 13:59:36.411106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.091 [2024-07-15 13:59:36.411815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-15 13:59:36.411851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.091 [2024-07-15 13:59:36.411862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.091 [2024-07-15 13:59:36.412102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.091 [2024-07-15 13:59:36.412334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.091 [2024-07-15 13:59:36.412343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.091 [2024-07-15 13:59:36.412351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.091 [2024-07-15 13:59:36.415914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.091 [2024-07-15 13:59:36.424947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.091 [2024-07-15 13:59:36.425695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-15 13:59:36.425731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.091 [2024-07-15 13:59:36.425742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.091 [2024-07-15 13:59:36.425981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.091 [2024-07-15 13:59:36.426214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.091 [2024-07-15 13:59:36.426223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.091 [2024-07-15 13:59:36.426231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.091 [2024-07-15 13:59:36.429793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.091 [2024-07-15 13:59:36.438818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.091 [2024-07-15 13:59:36.439532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-15 13:59:36.439569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.091 [2024-07-15 13:59:36.439580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.091 [2024-07-15 13:59:36.439819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.091 [2024-07-15 13:59:36.440043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.091 [2024-07-15 13:59:36.440051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.091 [2024-07-15 13:59:36.440059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.091 [2024-07-15 13:59:36.443630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.091 [2024-07-15 13:59:36.452664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.091 [2024-07-15 13:59:36.453417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-15 13:59:36.453454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.091 [2024-07-15 13:59:36.453464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.091 [2024-07-15 13:59:36.453704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.091 [2024-07-15 13:59:36.453927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.091 [2024-07-15 13:59:36.453936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.091 [2024-07-15 13:59:36.453943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.091 [2024-07-15 13:59:36.457594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.091 [2024-07-15 13:59:36.466629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.091 [2024-07-15 13:59:36.467390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.091 [2024-07-15 13:59:36.467427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.091 [2024-07-15 13:59:36.467438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.091 [2024-07-15 13:59:36.467678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.092 [2024-07-15 13:59:36.467901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.092 [2024-07-15 13:59:36.467910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.092 [2024-07-15 13:59:36.467917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.092 [2024-07-15 13:59:36.471486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.092 [2024-07-15 13:59:36.480514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.092 [2024-07-15 13:59:36.481223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-15 13:59:36.481259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.092 [2024-07-15 13:59:36.481270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.092 [2024-07-15 13:59:36.481510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.092 [2024-07-15 13:59:36.481733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.092 [2024-07-15 13:59:36.481741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.092 [2024-07-15 13:59:36.481749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.092 [2024-07-15 13:59:36.485324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.092 [2024-07-15 13:59:36.494362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.092 [2024-07-15 13:59:36.495115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-15 13:59:36.495158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.092 [2024-07-15 13:59:36.495169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.092 [2024-07-15 13:59:36.495418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.092 [2024-07-15 13:59:36.495641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.092 [2024-07-15 13:59:36.495650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.092 [2024-07-15 13:59:36.495657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.092 [2024-07-15 13:59:36.499228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.092 [2024-07-15 13:59:36.508253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.092 [2024-07-15 13:59:36.508881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-15 13:59:36.508899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.092 [2024-07-15 13:59:36.508907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.092 [2024-07-15 13:59:36.509133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.092 [2024-07-15 13:59:36.509354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.092 [2024-07-15 13:59:36.509364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.092 [2024-07-15 13:59:36.509371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.092 [2024-07-15 13:59:36.512929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.092 [2024-07-15 13:59:36.522213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.092 [2024-07-15 13:59:36.522913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-15 13:59:36.522949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.092 [2024-07-15 13:59:36.522960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.092 [2024-07-15 13:59:36.523207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.092 [2024-07-15 13:59:36.523432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.092 [2024-07-15 13:59:36.523440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.092 [2024-07-15 13:59:36.523448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.092 [2024-07-15 13:59:36.527008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.092 [2024-07-15 13:59:36.536036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.092 [2024-07-15 13:59:36.536796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-15 13:59:36.536833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.092 [2024-07-15 13:59:36.536844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.092 [2024-07-15 13:59:36.537083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.092 [2024-07-15 13:59:36.537316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.092 [2024-07-15 13:59:36.537326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.092 [2024-07-15 13:59:36.537337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.092 [2024-07-15 13:59:36.540900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.092 [2024-07-15 13:59:36.549927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.092 [2024-07-15 13:59:36.550641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-15 13:59:36.550678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.092 [2024-07-15 13:59:36.550688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.092 [2024-07-15 13:59:36.550928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.092 [2024-07-15 13:59:36.551162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.092 [2024-07-15 13:59:36.551170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.092 [2024-07-15 13:59:36.551179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.092 [2024-07-15 13:59:36.554744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.092 [2024-07-15 13:59:36.563774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.092 [2024-07-15 13:59:36.564526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-15 13:59:36.564563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.092 [2024-07-15 13:59:36.564574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.092 [2024-07-15 13:59:36.564814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.092 [2024-07-15 13:59:36.565037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.092 [2024-07-15 13:59:36.565046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.092 [2024-07-15 13:59:36.565053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.092 [2024-07-15 13:59:36.568626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.092 [2024-07-15 13:59:36.577652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.092 [2024-07-15 13:59:36.578394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-15 13:59:36.578431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.092 [2024-07-15 13:59:36.578442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.092 [2024-07-15 13:59:36.578682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.092 [2024-07-15 13:59:36.578905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.092 [2024-07-15 13:59:36.578914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.092 [2024-07-15 13:59:36.578921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.092 [2024-07-15 13:59:36.582494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.092 [2024-07-15 13:59:36.591551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.092 [2024-07-15 13:59:36.592343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-15 13:59:36.592384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.092 [2024-07-15 13:59:36.592395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.092 [2024-07-15 13:59:36.592635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.092 [2024-07-15 13:59:36.592858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.092 [2024-07-15 13:59:36.592866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.092 [2024-07-15 13:59:36.592874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.092 [2024-07-15 13:59:36.596454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.092 [2024-07-15 13:59:36.605481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.092 [2024-07-15 13:59:36.606207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.092 [2024-07-15 13:59:36.606244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.092 [2024-07-15 13:59:36.606254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.092 [2024-07-15 13:59:36.606494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.092 [2024-07-15 13:59:36.606718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.092 [2024-07-15 13:59:36.606726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.092 [2024-07-15 13:59:36.606734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.092 [2024-07-15 13:59:36.610305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.354 [2024-07-15 13:59:36.619334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.354 [2024-07-15 13:59:36.620087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-07-15 13:59:36.620131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.354 [2024-07-15 13:59:36.620145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.354 [2024-07-15 13:59:36.620388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.354 [2024-07-15 13:59:36.620612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.354 [2024-07-15 13:59:36.620620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.354 [2024-07-15 13:59:36.620628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.354 [2024-07-15 13:59:36.624195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.354 [2024-07-15 13:59:36.633227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.354 [2024-07-15 13:59:36.633992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-07-15 13:59:36.634029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.354 [2024-07-15 13:59:36.634040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.354 [2024-07-15 13:59:36.634288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.354 [2024-07-15 13:59:36.634518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.354 [2024-07-15 13:59:36.634526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.354 [2024-07-15 13:59:36.634534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.354 [2024-07-15 13:59:36.638100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.354 [2024-07-15 13:59:36.647132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.354 [2024-07-15 13:59:36.647842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-07-15 13:59:36.647879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.354 [2024-07-15 13:59:36.647890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.354 [2024-07-15 13:59:36.648139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.354 [2024-07-15 13:59:36.648364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.354 [2024-07-15 13:59:36.648372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.354 [2024-07-15 13:59:36.648380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.354 [2024-07-15 13:59:36.651945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.354 [2024-07-15 13:59:36.660975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.354 [2024-07-15 13:59:36.661692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.354 [2024-07-15 13:59:36.661728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.354 [2024-07-15 13:59:36.661739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.354 [2024-07-15 13:59:36.661978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.354 [2024-07-15 13:59:36.662210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.354 [2024-07-15 13:59:36.662220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.354 [2024-07-15 13:59:36.662227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.354 [2024-07-15 13:59:36.665790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.354 [2024-07-15 13:59:36.674817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.355 [2024-07-15 13:59:36.675492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-07-15 13:59:36.675529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.355 [2024-07-15 13:59:36.675539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.355 [2024-07-15 13:59:36.675779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.355 [2024-07-15 13:59:36.676002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.355 [2024-07-15 13:59:36.676011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.355 [2024-07-15 13:59:36.676018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.355 [2024-07-15 13:59:36.679595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.355 [2024-07-15 13:59:36.688624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.355 [2024-07-15 13:59:36.689429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-07-15 13:59:36.689466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.355 [2024-07-15 13:59:36.689477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.355 [2024-07-15 13:59:36.689717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.355 [2024-07-15 13:59:36.689940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.355 [2024-07-15 13:59:36.689948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.355 [2024-07-15 13:59:36.689956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.355 [2024-07-15 13:59:36.693534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.355 [2024-07-15 13:59:36.702562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.355 [2024-07-15 13:59:36.703206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-07-15 13:59:36.703243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.355 [2024-07-15 13:59:36.703253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.355 [2024-07-15 13:59:36.703493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.355 [2024-07-15 13:59:36.703717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.355 [2024-07-15 13:59:36.703725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.355 [2024-07-15 13:59:36.703733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.355 [2024-07-15 13:59:36.707305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.355 [2024-07-15 13:59:36.716376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.355 [2024-07-15 13:59:36.717160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-07-15 13:59:36.717197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.355 [2024-07-15 13:59:36.717208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.355 [2024-07-15 13:59:36.717447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.355 [2024-07-15 13:59:36.717671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.355 [2024-07-15 13:59:36.717679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.355 [2024-07-15 13:59:36.717687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.355 [2024-07-15 13:59:36.721261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.355 [2024-07-15 13:59:36.730290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.355 [2024-07-15 13:59:36.731040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-07-15 13:59:36.731077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.355 [2024-07-15 13:59:36.731092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.355 [2024-07-15 13:59:36.731340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.355 [2024-07-15 13:59:36.731565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.355 [2024-07-15 13:59:36.731573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.355 [2024-07-15 13:59:36.731581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.355 [2024-07-15 13:59:36.735146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.355 [2024-07-15 13:59:36.744177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.355 [2024-07-15 13:59:36.744862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-07-15 13:59:36.744899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.355 [2024-07-15 13:59:36.744909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.355 [2024-07-15 13:59:36.745158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.355 [2024-07-15 13:59:36.745383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.355 [2024-07-15 13:59:36.745391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.355 [2024-07-15 13:59:36.745398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.355 [2024-07-15 13:59:36.748960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.355 [2024-07-15 13:59:36.757999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.355 [2024-07-15 13:59:36.758641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-07-15 13:59:36.758660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.355 [2024-07-15 13:59:36.758668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.355 [2024-07-15 13:59:36.758888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.355 [2024-07-15 13:59:36.759109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.355 [2024-07-15 13:59:36.759117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.355 [2024-07-15 13:59:36.759130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.355 [2024-07-15 13:59:36.762693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.355 [2024-07-15 13:59:36.771937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.355 [2024-07-15 13:59:36.772549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-07-15 13:59:36.772564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.355 [2024-07-15 13:59:36.772572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.355 [2024-07-15 13:59:36.772791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.355 [2024-07-15 13:59:36.773011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.355 [2024-07-15 13:59:36.773023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.355 [2024-07-15 13:59:36.773030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.355 [2024-07-15 13:59:36.776630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.355 [2024-07-15 13:59:36.785869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.355 [2024-07-15 13:59:36.786552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-07-15 13:59:36.786589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.355 [2024-07-15 13:59:36.786599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.355 [2024-07-15 13:59:36.786839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.355 [2024-07-15 13:59:36.787063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.355 [2024-07-15 13:59:36.787071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.355 [2024-07-15 13:59:36.787079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.355 [2024-07-15 13:59:36.790653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.355 [2024-07-15 13:59:36.799816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.355 [2024-07-15 13:59:36.800529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-07-15 13:59:36.800565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.355 [2024-07-15 13:59:36.800576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.355 [2024-07-15 13:59:36.800816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.355 [2024-07-15 13:59:36.801040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.355 [2024-07-15 13:59:36.801048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.355 [2024-07-15 13:59:36.801056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.355 [2024-07-15 13:59:36.804626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.355 [2024-07-15 13:59:36.813649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.355 [2024-07-15 13:59:36.814382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.355 [2024-07-15 13:59:36.814419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.355 [2024-07-15 13:59:36.814430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.355 [2024-07-15 13:59:36.814670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.355 [2024-07-15 13:59:36.814894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.355 [2024-07-15 13:59:36.814902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.355 [2024-07-15 13:59:36.814910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.356 [2024-07-15 13:59:36.818481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.356 [2024-07-15 13:59:36.827512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.356 [2024-07-15 13:59:36.828167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-07-15 13:59:36.828193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.356 [2024-07-15 13:59:36.828201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.356 [2024-07-15 13:59:36.828427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.356 [2024-07-15 13:59:36.828648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.356 [2024-07-15 13:59:36.828656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.356 [2024-07-15 13:59:36.828663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.356 [2024-07-15 13:59:36.832227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.356 [2024-07-15 13:59:36.841459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.356 [2024-07-15 13:59:36.842207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-07-15 13:59:36.842244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.356 [2024-07-15 13:59:36.842256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.356 [2024-07-15 13:59:36.842497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.356 [2024-07-15 13:59:36.842721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.356 [2024-07-15 13:59:36.842729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.356 [2024-07-15 13:59:36.842737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.356 [2024-07-15 13:59:36.846312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.356 [2024-07-15 13:59:36.855339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.356 [2024-07-15 13:59:36.856028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-07-15 13:59:36.856064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.356 [2024-07-15 13:59:36.856075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.356 [2024-07-15 13:59:36.856324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.356 [2024-07-15 13:59:36.856549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.356 [2024-07-15 13:59:36.856557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.356 [2024-07-15 13:59:36.856565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.356 [2024-07-15 13:59:36.860130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.356 [2024-07-15 13:59:36.869166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.356 [2024-07-15 13:59:36.869917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.356 [2024-07-15 13:59:36.869953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.356 [2024-07-15 13:59:36.869964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.356 [2024-07-15 13:59:36.870218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.356 [2024-07-15 13:59:36.870443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.356 [2024-07-15 13:59:36.870451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.356 [2024-07-15 13:59:36.870459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.356 [2024-07-15 13:59:36.874020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.618 [2024-07-15 13:59:36.883057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.618 [2024-07-15 13:59:36.883816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.618 [2024-07-15 13:59:36.883852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.618 [2024-07-15 13:59:36.883863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.618 [2024-07-15 13:59:36.884103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.618 [2024-07-15 13:59:36.884337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.618 [2024-07-15 13:59:36.884347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.618 [2024-07-15 13:59:36.884354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.618 [2024-07-15 13:59:36.887917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.618 [2024-07-15 13:59:36.896954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.618 [2024-07-15 13:59:36.897687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.618 [2024-07-15 13:59:36.897724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.618 [2024-07-15 13:59:36.897735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.618 [2024-07-15 13:59:36.897975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.618 [2024-07-15 13:59:36.898208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.618 [2024-07-15 13:59:36.898218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.618 [2024-07-15 13:59:36.898225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.618 [2024-07-15 13:59:36.901787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.618 [2024-07-15 13:59:36.910810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.618 [2024-07-15 13:59:36.911521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.618 [2024-07-15 13:59:36.911557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.618 [2024-07-15 13:59:36.911568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.618 [2024-07-15 13:59:36.911808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.618 [2024-07-15 13:59:36.912031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.618 [2024-07-15 13:59:36.912039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.618 [2024-07-15 13:59:36.912052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.618 [2024-07-15 13:59:36.915624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.618 [2024-07-15 13:59:36.924651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.618 [2024-07-15 13:59:36.925396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.618 [2024-07-15 13:59:36.925433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.618 [2024-07-15 13:59:36.925444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.618 [2024-07-15 13:59:36.925684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.618 [2024-07-15 13:59:36.925907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.618 [2024-07-15 13:59:36.925915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.618 [2024-07-15 13:59:36.925923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.618 [2024-07-15 13:59:36.929496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.619 [2024-07-15 13:59:36.938527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.619 [2024-07-15 13:59:36.939277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.619 [2024-07-15 13:59:36.939314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.619 [2024-07-15 13:59:36.939325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.619 [2024-07-15 13:59:36.939565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.619 [2024-07-15 13:59:36.939789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.619 [2024-07-15 13:59:36.939797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.619 [2024-07-15 13:59:36.939804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.619 [2024-07-15 13:59:36.943377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.619 [2024-07-15 13:59:36.952412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.619 [2024-07-15 13:59:36.953076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.619 [2024-07-15 13:59:36.953094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.619 [2024-07-15 13:59:36.953102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.619 [2024-07-15 13:59:36.953329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.619 [2024-07-15 13:59:36.953550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.619 [2024-07-15 13:59:36.953557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.619 [2024-07-15 13:59:36.953564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.619 [2024-07-15 13:59:36.957124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.619 [2024-07-15 13:59:36.966357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.619 [2024-07-15 13:59:36.966963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.619 [2024-07-15 13:59:36.966984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.619 [2024-07-15 13:59:36.966991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.619 [2024-07-15 13:59:36.967217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.619 [2024-07-15 13:59:36.967437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.619 [2024-07-15 13:59:36.967446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.619 [2024-07-15 13:59:36.967453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.619 [2024-07-15 13:59:36.971007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.619 [2024-07-15 13:59:36.980242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.619 [2024-07-15 13:59:36.980853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.619 [2024-07-15 13:59:36.980868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.619 [2024-07-15 13:59:36.980875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.619 [2024-07-15 13:59:36.981095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.619 [2024-07-15 13:59:36.981321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.619 [2024-07-15 13:59:36.981329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.619 [2024-07-15 13:59:36.981335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.619 [2024-07-15 13:59:36.984892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.619 [2024-07-15 13:59:36.994140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.619 [2024-07-15 13:59:36.994673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.619 [2024-07-15 13:59:36.994688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.619 [2024-07-15 13:59:36.994696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.619 [2024-07-15 13:59:36.994916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.619 [2024-07-15 13:59:36.995141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.619 [2024-07-15 13:59:36.995150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.619 [2024-07-15 13:59:36.995156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.619 [2024-07-15 13:59:36.998711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.619 [2024-07-15 13:59:37.007968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.619 [2024-07-15 13:59:37.008588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.619 [2024-07-15 13:59:37.008604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.619 [2024-07-15 13:59:37.008612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.619 [2024-07-15 13:59:37.008831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.619 [2024-07-15 13:59:37.009055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.619 [2024-07-15 13:59:37.009063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.619 [2024-07-15 13:59:37.009070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.619 [2024-07-15 13:59:37.012633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.619 [2024-07-15 13:59:37.021881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.619 [2024-07-15 13:59:37.022474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.619 [2024-07-15 13:59:37.022490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.619 [2024-07-15 13:59:37.022497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.619 [2024-07-15 13:59:37.022717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.619 [2024-07-15 13:59:37.022937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.619 [2024-07-15 13:59:37.022944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.619 [2024-07-15 13:59:37.022951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.619 [2024-07-15 13:59:37.026511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.619 [2024-07-15 13:59:37.035791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.619 [2024-07-15 13:59:37.036510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.619 [2024-07-15 13:59:37.036546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.619 [2024-07-15 13:59:37.036557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.619 [2024-07-15 13:59:37.036797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.620 [2024-07-15 13:59:37.037020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.620 [2024-07-15 13:59:37.037029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.620 [2024-07-15 13:59:37.037037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.620 [2024-07-15 13:59:37.040605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.620 [2024-07-15 13:59:37.049631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.620 [2024-07-15 13:59:37.050430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-07-15 13:59:37.050467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.620 [2024-07-15 13:59:37.050478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.620 [2024-07-15 13:59:37.050718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.620 [2024-07-15 13:59:37.050942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.620 [2024-07-15 13:59:37.050950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.620 [2024-07-15 13:59:37.050957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.620 [2024-07-15 13:59:37.054538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.620 [2024-07-15 13:59:37.063573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.620 [2024-07-15 13:59:37.064310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-07-15 13:59:37.064347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.620 [2024-07-15 13:59:37.064357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.620 [2024-07-15 13:59:37.064598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.620 [2024-07-15 13:59:37.064822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.620 [2024-07-15 13:59:37.064830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.620 [2024-07-15 13:59:37.064838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.620 [2024-07-15 13:59:37.068408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.620 [2024-07-15 13:59:37.077435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.620 [2024-07-15 13:59:37.078181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-07-15 13:59:37.078218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.620 [2024-07-15 13:59:37.078230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.620 [2024-07-15 13:59:37.078474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.620 [2024-07-15 13:59:37.078698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.620 [2024-07-15 13:59:37.078706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.620 [2024-07-15 13:59:37.078714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.620 [2024-07-15 13:59:37.082288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.620 [2024-07-15 13:59:37.091302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.620 [2024-07-15 13:59:37.092015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-07-15 13:59:37.092051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.620 [2024-07-15 13:59:37.092062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.620 [2024-07-15 13:59:37.092312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.620 [2024-07-15 13:59:37.092536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.620 [2024-07-15 13:59:37.092545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.620 [2024-07-15 13:59:37.092553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.620 [2024-07-15 13:59:37.096128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.620 [2024-07-15 13:59:37.105159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.620 [2024-07-15 13:59:37.105873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-07-15 13:59:37.105909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.620 [2024-07-15 13:59:37.105925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.620 [2024-07-15 13:59:37.106174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.620 [2024-07-15 13:59:37.106398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.620 [2024-07-15 13:59:37.106407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.620 [2024-07-15 13:59:37.106414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.620 [2024-07-15 13:59:37.109977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.620 [2024-07-15 13:59:37.119003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.620 [2024-07-15 13:59:37.119710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-07-15 13:59:37.119747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.620 [2024-07-15 13:59:37.119758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.620 [2024-07-15 13:59:37.119998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.620 [2024-07-15 13:59:37.120231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.620 [2024-07-15 13:59:37.120241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.620 [2024-07-15 13:59:37.120248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.620 [2024-07-15 13:59:37.123811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.620 [2024-07-15 13:59:37.132832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.620 [2024-07-15 13:59:37.133582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.620 [2024-07-15 13:59:37.133619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.620 [2024-07-15 13:59:37.133630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.620 [2024-07-15 13:59:37.133869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.620 [2024-07-15 13:59:37.134093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.620 [2024-07-15 13:59:37.134102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.620 [2024-07-15 13:59:37.134109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.620 [2024-07-15 13:59:37.137682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.883 [2024-07-15 13:59:37.146715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.883 [2024-07-15 13:59:37.147471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.883 [2024-07-15 13:59:37.147508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.883 [2024-07-15 13:59:37.147518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.883 [2024-07-15 13:59:37.147758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.883 [2024-07-15 13:59:37.147982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.883 [2024-07-15 13:59:37.147995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.883 [2024-07-15 13:59:37.148002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.883 [2024-07-15 13:59:37.151579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.883 [2024-07-15 13:59:37.160619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.883 [2024-07-15 13:59:37.161364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.883 [2024-07-15 13:59:37.161400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.883 [2024-07-15 13:59:37.161411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.883 [2024-07-15 13:59:37.161651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.883 [2024-07-15 13:59:37.161875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.883 [2024-07-15 13:59:37.161883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.883 [2024-07-15 13:59:37.161891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.883 [2024-07-15 13:59:37.165466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.883 [2024-07-15 13:59:37.174492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.883 [2024-07-15 13:59:37.175020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.883 [2024-07-15 13:59:37.175038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.883 [2024-07-15 13:59:37.175046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.883 [2024-07-15 13:59:37.175274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.883 [2024-07-15 13:59:37.175494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.883 [2024-07-15 13:59:37.175503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.883 [2024-07-15 13:59:37.175510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.883 [2024-07-15 13:59:37.179069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.883 [2024-07-15 13:59:37.188311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.883 [2024-07-15 13:59:37.188917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.883 [2024-07-15 13:59:37.188933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.883 [2024-07-15 13:59:37.188940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.883 [2024-07-15 13:59:37.189165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.883 [2024-07-15 13:59:37.189385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.883 [2024-07-15 13:59:37.189393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.883 [2024-07-15 13:59:37.189400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.883 [2024-07-15 13:59:37.192952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.883 [2024-07-15 13:59:37.202212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.883 [2024-07-15 13:59:37.202820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.883 [2024-07-15 13:59:37.202835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.883 [2024-07-15 13:59:37.202843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.883 [2024-07-15 13:59:37.203062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.883 [2024-07-15 13:59:37.203288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.883 [2024-07-15 13:59:37.203297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.883 [2024-07-15 13:59:37.203304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.883 [2024-07-15 13:59:37.206859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.883 [2024-07-15 13:59:37.216125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.883 [2024-07-15 13:59:37.216834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.883 [2024-07-15 13:59:37.216870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.883 [2024-07-15 13:59:37.216881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.883 [2024-07-15 13:59:37.217121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.883 [2024-07-15 13:59:37.217353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.883 [2024-07-15 13:59:37.217362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.883 [2024-07-15 13:59:37.217370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.883 [2024-07-15 13:59:37.220934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.883 [2024-07-15 13:59:37.229996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.883 [2024-07-15 13:59:37.230533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.883 [2024-07-15 13:59:37.230552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.883 [2024-07-15 13:59:37.230560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.883 [2024-07-15 13:59:37.230780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.884 [2024-07-15 13:59:37.231000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.884 [2024-07-15 13:59:37.231009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.884 [2024-07-15 13:59:37.231016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.884 [2024-07-15 13:59:37.234580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.884 [2024-07-15 13:59:37.243814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.884 [2024-07-15 13:59:37.244541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.884 [2024-07-15 13:59:37.244578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.884 [2024-07-15 13:59:37.244589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.884 [2024-07-15 13:59:37.244833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.884 [2024-07-15 13:59:37.245057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.884 [2024-07-15 13:59:37.245065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.884 [2024-07-15 13:59:37.245073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.884 [2024-07-15 13:59:37.248642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.884 [2024-07-15 13:59:37.257669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.884 [2024-07-15 13:59:37.258398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.884 [2024-07-15 13:59:37.258434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.884 [2024-07-15 13:59:37.258445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.884 [2024-07-15 13:59:37.258685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.884 [2024-07-15 13:59:37.258909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.884 [2024-07-15 13:59:37.258917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.884 [2024-07-15 13:59:37.258925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.884 [2024-07-15 13:59:37.262494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.884 [2024-07-15 13:59:37.271527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.884 [2024-07-15 13:59:37.272229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.884 [2024-07-15 13:59:37.272266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.884 [2024-07-15 13:59:37.272278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.884 [2024-07-15 13:59:37.272521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.884 [2024-07-15 13:59:37.272745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.884 [2024-07-15 13:59:37.272754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.884 [2024-07-15 13:59:37.272762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.884 [2024-07-15 13:59:37.276332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.884 [2024-07-15 13:59:37.285357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.884 [2024-07-15 13:59:37.285994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.884 [2024-07-15 13:59:37.286012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.884 [2024-07-15 13:59:37.286020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.884 [2024-07-15 13:59:37.286245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.884 [2024-07-15 13:59:37.286465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.884 [2024-07-15 13:59:37.286474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.884 [2024-07-15 13:59:37.286486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.884 [2024-07-15 13:59:37.290046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.884 [2024-07-15 13:59:37.299308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.884 [2024-07-15 13:59:37.299923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.884 [2024-07-15 13:59:37.299939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.884 [2024-07-15 13:59:37.299947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.884 [2024-07-15 13:59:37.300172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.884 [2024-07-15 13:59:37.300392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.884 [2024-07-15 13:59:37.300399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.884 [2024-07-15 13:59:37.300406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.884 [2024-07-15 13:59:37.303975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.884 [2024-07-15 13:59:37.313210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.884 [2024-07-15 13:59:37.313908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.884 [2024-07-15 13:59:37.313945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.884 [2024-07-15 13:59:37.313957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.884 [2024-07-15 13:59:37.314210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.884 [2024-07-15 13:59:37.314435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.884 [2024-07-15 13:59:37.314443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.884 [2024-07-15 13:59:37.314451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.884 [2024-07-15 13:59:37.318014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.884 [2024-07-15 13:59:37.327045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.884 [2024-07-15 13:59:37.327770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.884 [2024-07-15 13:59:37.327807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.884 [2024-07-15 13:59:37.327818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.884 [2024-07-15 13:59:37.328058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.884 [2024-07-15 13:59:37.328291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.884 [2024-07-15 13:59:37.328300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.884 [2024-07-15 13:59:37.328308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.884 [2024-07-15 13:59:37.331875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.884 [2024-07-15 13:59:37.340917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.884 [2024-07-15 13:59:37.341733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.884 [2024-07-15 13:59:37.341769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.884 [2024-07-15 13:59:37.341780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.884 [2024-07-15 13:59:37.342020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.884 [2024-07-15 13:59:37.342253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.884 [2024-07-15 13:59:37.342262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.884 [2024-07-15 13:59:37.342270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.884 [2024-07-15 13:59:37.345839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.884 [2024-07-15 13:59:37.354887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.884 [2024-07-15 13:59:37.355558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.884 [2024-07-15 13:59:37.355595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.884 [2024-07-15 13:59:37.355606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.884 [2024-07-15 13:59:37.355845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.884 [2024-07-15 13:59:37.356070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.884 [2024-07-15 13:59:37.356078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.884 [2024-07-15 13:59:37.356086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.885 [2024-07-15 13:59:37.359664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.885 [2024-07-15 13:59:37.368722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.885 [2024-07-15 13:59:37.369439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-07-15 13:59:37.369476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.885 [2024-07-15 13:59:37.369487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.885 [2024-07-15 13:59:37.369727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.885 [2024-07-15 13:59:37.369951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.885 [2024-07-15 13:59:37.369960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.885 [2024-07-15 13:59:37.369967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.885 [2024-07-15 13:59:37.373536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.885 [2024-07-15 13:59:37.382572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.885 [2024-07-15 13:59:37.383317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-07-15 13:59:37.383354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.885 [2024-07-15 13:59:37.383364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.885 [2024-07-15 13:59:37.383604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.885 [2024-07-15 13:59:37.383833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.885 [2024-07-15 13:59:37.383842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.885 [2024-07-15 13:59:37.383850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.885 [2024-07-15 13:59:37.387431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.885 [2024-07-15 13:59:37.396484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.885 [2024-07-15 13:59:37.397104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-07-15 13:59:37.397128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:10.885 [2024-07-15 13:59:37.397137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:10.885 [2024-07-15 13:59:37.397357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:10.885 [2024-07-15 13:59:37.397577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.885 [2024-07-15 13:59:37.397585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.885 [2024-07-15 13:59:37.397592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.885 [2024-07-15 13:59:37.401157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.147 [2024-07-15 13:59:37.410407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.147 [2024-07-15 13:59:37.411062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-15 13:59:37.411078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.147 [2024-07-15 13:59:37.411085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.147 [2024-07-15 13:59:37.411310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.147 [2024-07-15 13:59:37.411530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.147 [2024-07-15 13:59:37.411538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.147 [2024-07-15 13:59:37.411545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.147 [2024-07-15 13:59:37.415107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.147 [2024-07-15 13:59:37.424389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.147 [2024-07-15 13:59:37.425163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-15 13:59:37.425201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.147 [2024-07-15 13:59:37.425213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.147 [2024-07-15 13:59:37.425457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.147 [2024-07-15 13:59:37.425681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.147 [2024-07-15 13:59:37.425690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.147 [2024-07-15 13:59:37.425698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.147 [2024-07-15 13:59:37.429273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.147 [2024-07-15 13:59:37.438305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.147 [2024-07-15 13:59:37.439012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-15 13:59:37.439049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.147 [2024-07-15 13:59:37.439059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.147 [2024-07-15 13:59:37.439306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.147 [2024-07-15 13:59:37.439531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.147 [2024-07-15 13:59:37.439539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.147 [2024-07-15 13:59:37.439547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.147 [2024-07-15 13:59:37.443112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.147 [2024-07-15 13:59:37.452150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.147 [2024-07-15 13:59:37.452816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-15 13:59:37.452835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.147 [2024-07-15 13:59:37.452843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.147 [2024-07-15 13:59:37.453064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.147 [2024-07-15 13:59:37.453291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.147 [2024-07-15 13:59:37.453301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.147 [2024-07-15 13:59:37.453308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.147 [2024-07-15 13:59:37.456869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.147 [2024-07-15 13:59:37.466107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.147 [2024-07-15 13:59:37.466807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-15 13:59:37.466843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.147 [2024-07-15 13:59:37.466854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.147 [2024-07-15 13:59:37.467094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.147 [2024-07-15 13:59:37.467326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.147 [2024-07-15 13:59:37.467338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.147 [2024-07-15 13:59:37.467346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.147 [2024-07-15 13:59:37.470913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.147 [2024-07-15 13:59:37.479944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.147 [2024-07-15 13:59:37.480704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-15 13:59:37.480724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.147 [2024-07-15 13:59:37.480736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.147 [2024-07-15 13:59:37.480958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.147 [2024-07-15 13:59:37.481183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.147 [2024-07-15 13:59:37.481192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.147 [2024-07-15 13:59:37.481199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.147 [2024-07-15 13:59:37.484759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.147 [2024-07-15 13:59:37.493787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.147 [2024-07-15 13:59:37.494446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-15 13:59:37.494483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.147 [2024-07-15 13:59:37.494494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.147 [2024-07-15 13:59:37.494734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.147 [2024-07-15 13:59:37.494958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.147 [2024-07-15 13:59:37.494967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.147 [2024-07-15 13:59:37.494975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.147 [2024-07-15 13:59:37.498555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.147 [2024-07-15 13:59:37.507795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.147 [2024-07-15 13:59:37.508538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-15 13:59:37.508575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.147 [2024-07-15 13:59:37.508586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.147 [2024-07-15 13:59:37.508826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.147 [2024-07-15 13:59:37.509049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.148 [2024-07-15 13:59:37.509057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.148 [2024-07-15 13:59:37.509065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.148 [2024-07-15 13:59:37.512631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.148 [2024-07-15 13:59:37.521670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.148 [2024-07-15 13:59:37.522438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-15 13:59:37.522475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.148 [2024-07-15 13:59:37.522486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.148 [2024-07-15 13:59:37.522726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.148 [2024-07-15 13:59:37.522949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.148 [2024-07-15 13:59:37.522962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.148 [2024-07-15 13:59:37.522970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.148 [2024-07-15 13:59:37.526541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.148 [2024-07-15 13:59:37.535571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.148 [2024-07-15 13:59:37.536272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-15 13:59:37.536310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.148 [2024-07-15 13:59:37.536320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.148 [2024-07-15 13:59:37.536560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.148 [2024-07-15 13:59:37.536784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.148 [2024-07-15 13:59:37.536792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.148 [2024-07-15 13:59:37.536800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.148 [2024-07-15 13:59:37.540371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.148 [2024-07-15 13:59:37.549402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.148 [2024-07-15 13:59:37.549951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-15 13:59:37.549970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.148 [2024-07-15 13:59:37.549978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.148 [2024-07-15 13:59:37.550204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.148 [2024-07-15 13:59:37.550425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.148 [2024-07-15 13:59:37.550433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.148 [2024-07-15 13:59:37.550440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.148 [2024-07-15 13:59:37.553996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.148 [2024-07-15 13:59:37.563236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.148 [2024-07-15 13:59:37.563940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-15 13:59:37.563976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.148 [2024-07-15 13:59:37.563987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.148 [2024-07-15 13:59:37.564235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.148 [2024-07-15 13:59:37.564460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.148 [2024-07-15 13:59:37.564469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.148 [2024-07-15 13:59:37.564477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.148 [2024-07-15 13:59:37.568039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.148 [2024-07-15 13:59:37.577082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.148 [2024-07-15 13:59:37.577835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-15 13:59:37.577872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.148 [2024-07-15 13:59:37.577883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.148 [2024-07-15 13:59:37.578130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.148 [2024-07-15 13:59:37.578355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.148 [2024-07-15 13:59:37.578363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.148 [2024-07-15 13:59:37.578371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.148 [2024-07-15 13:59:37.581934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.148 [2024-07-15 13:59:37.590965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.148 [2024-07-15 13:59:37.591722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-15 13:59:37.591759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.148 [2024-07-15 13:59:37.591770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.148 [2024-07-15 13:59:37.592010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.148 [2024-07-15 13:59:37.592244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.148 [2024-07-15 13:59:37.592253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.148 [2024-07-15 13:59:37.592261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.148 [2024-07-15 13:59:37.595825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.148 [2024-07-15 13:59:37.604868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.148 [2024-07-15 13:59:37.605578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-15 13:59:37.605615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.148 [2024-07-15 13:59:37.605626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.148 [2024-07-15 13:59:37.605866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.148 [2024-07-15 13:59:37.606090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.148 [2024-07-15 13:59:37.606098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.148 [2024-07-15 13:59:37.606106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.148 [2024-07-15 13:59:37.609677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.148 [2024-07-15 13:59:37.618710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.148 [2024-07-15 13:59:37.619375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-15 13:59:37.619412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.148 [2024-07-15 13:59:37.619422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.148 [2024-07-15 13:59:37.619667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.148 [2024-07-15 13:59:37.619890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.148 [2024-07-15 13:59:37.619899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.148 [2024-07-15 13:59:37.619907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.148 [2024-07-15 13:59:37.623474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.148 [2024-07-15 13:59:37.632536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.148 [2024-07-15 13:59:37.633295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-15 13:59:37.633332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.148 [2024-07-15 13:59:37.633343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.148 [2024-07-15 13:59:37.633583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.148 [2024-07-15 13:59:37.633807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.148 [2024-07-15 13:59:37.633815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.148 [2024-07-15 13:59:37.633823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.148 [2024-07-15 13:59:37.637394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.148 [2024-07-15 13:59:37.646425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.148 [2024-07-15 13:59:37.647047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-15 13:59:37.647064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.149 [2024-07-15 13:59:37.647072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.149 [2024-07-15 13:59:37.647297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.149 [2024-07-15 13:59:37.647518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.149 [2024-07-15 13:59:37.647526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.149 [2024-07-15 13:59:37.647533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.149 [2024-07-15 13:59:37.651091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.149 [2024-07-15 13:59:37.660326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.149 [2024-07-15 13:59:37.660948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.149 [2024-07-15 13:59:37.660964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.149 [2024-07-15 13:59:37.660971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.149 [2024-07-15 13:59:37.661195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.149 [2024-07-15 13:59:37.661416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.149 [2024-07-15 13:59:37.661424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.149 [2024-07-15 13:59:37.661436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.149 [2024-07-15 13:59:37.664991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.411 [2024-07-15 13:59:37.674229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.411 [2024-07-15 13:59:37.674768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-07-15 13:59:37.674784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.411 [2024-07-15 13:59:37.674792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.411 [2024-07-15 13:59:37.675011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.411 [2024-07-15 13:59:37.675236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.411 [2024-07-15 13:59:37.675244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.411 [2024-07-15 13:59:37.675251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.411 [2024-07-15 13:59:37.678809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.411 [2024-07-15 13:59:37.688044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.411 [2024-07-15 13:59:37.688654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-07-15 13:59:37.688670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.411 [2024-07-15 13:59:37.688678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.411 [2024-07-15 13:59:37.688897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.411 [2024-07-15 13:59:37.689117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.411 [2024-07-15 13:59:37.689131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.411 [2024-07-15 13:59:37.689138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.411 [2024-07-15 13:59:37.692693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.411 [2024-07-15 13:59:37.701941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.411 [2024-07-15 13:59:37.702563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-07-15 13:59:37.702579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.411 [2024-07-15 13:59:37.702586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.411 [2024-07-15 13:59:37.702806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.411 [2024-07-15 13:59:37.703025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.411 [2024-07-15 13:59:37.703033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.411 [2024-07-15 13:59:37.703041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.411 [2024-07-15 13:59:37.706600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.411 [2024-07-15 13:59:37.715833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.411 [2024-07-15 13:59:37.716440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-07-15 13:59:37.716456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.411 [2024-07-15 13:59:37.716464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.411 [2024-07-15 13:59:37.716684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.411 [2024-07-15 13:59:37.716903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.411 [2024-07-15 13:59:37.716911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.411 [2024-07-15 13:59:37.716918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.411 [2024-07-15 13:59:37.720477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.411 [2024-07-15 13:59:37.729712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.411 [2024-07-15 13:59:37.730448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-07-15 13:59:37.730485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.411 [2024-07-15 13:59:37.730496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.411 [2024-07-15 13:59:37.730736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.411 [2024-07-15 13:59:37.730960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.411 [2024-07-15 13:59:37.730968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.411 [2024-07-15 13:59:37.730975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.411 [2024-07-15 13:59:37.734547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.411 [2024-07-15 13:59:37.743584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.411 [2024-07-15 13:59:37.744212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-07-15 13:59:37.744249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.411 [2024-07-15 13:59:37.744261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.411 [2024-07-15 13:59:37.744503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.411 [2024-07-15 13:59:37.744726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.411 [2024-07-15 13:59:37.744735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.411 [2024-07-15 13:59:37.744743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.411 [2024-07-15 13:59:37.748317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.411 [2024-07-15 13:59:37.757558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.411 [2024-07-15 13:59:37.758222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-07-15 13:59:37.758242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.411 [2024-07-15 13:59:37.758250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.411 [2024-07-15 13:59:37.758471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.411 [2024-07-15 13:59:37.758696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.411 [2024-07-15 13:59:37.758704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.411 [2024-07-15 13:59:37.758710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.411 [2024-07-15 13:59:37.762267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.411 [2024-07-15 13:59:37.771501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.411 [2024-07-15 13:59:37.772103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-07-15 13:59:37.772118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.411 [2024-07-15 13:59:37.772131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.411 [2024-07-15 13:59:37.772352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.411 [2024-07-15 13:59:37.772572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.411 [2024-07-15 13:59:37.772580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.411 [2024-07-15 13:59:37.772587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.411 [2024-07-15 13:59:37.776145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.411 [2024-07-15 13:59:37.785379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.411 [2024-07-15 13:59:37.785990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-07-15 13:59:37.786005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.411 [2024-07-15 13:59:37.786013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.411 [2024-07-15 13:59:37.786237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.411 [2024-07-15 13:59:37.786457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.411 [2024-07-15 13:59:37.786465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.411 [2024-07-15 13:59:37.786473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.411 [2024-07-15 13:59:37.790030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.411 [2024-07-15 13:59:37.799275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.411 [2024-07-15 13:59:37.799922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-07-15 13:59:37.799938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.412 [2024-07-15 13:59:37.799946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.412 [2024-07-15 13:59:37.800171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.412 [2024-07-15 13:59:37.800391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.412 [2024-07-15 13:59:37.800398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.412 [2024-07-15 13:59:37.800405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.412 [2024-07-15 13:59:37.803964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.412 [2024-07-15 13:59:37.813194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.412 [2024-07-15 13:59:37.813810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-07-15 13:59:37.813825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.412 [2024-07-15 13:59:37.813832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.412 [2024-07-15 13:59:37.814052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.412 [2024-07-15 13:59:37.814277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.412 [2024-07-15 13:59:37.814285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.412 [2024-07-15 13:59:37.814292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.412 [2024-07-15 13:59:37.817846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.412 [2024-07-15 13:59:37.827076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.412 [2024-07-15 13:59:37.827856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-07-15 13:59:37.827892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.412 [2024-07-15 13:59:37.827904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.412 [2024-07-15 13:59:37.828154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.412 [2024-07-15 13:59:37.828378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.412 [2024-07-15 13:59:37.828387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.412 [2024-07-15 13:59:37.828395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.412 [2024-07-15 13:59:37.831956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.412 [2024-07-15 13:59:37.841021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.412 [2024-07-15 13:59:37.841646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-07-15 13:59:37.841683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.412 [2024-07-15 13:59:37.841695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.412 [2024-07-15 13:59:37.841938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.412 [2024-07-15 13:59:37.842172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.412 [2024-07-15 13:59:37.842181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.412 [2024-07-15 13:59:37.842189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.412 [2024-07-15 13:59:37.845755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.412 [2024-07-15 13:59:37.854999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.412 [2024-07-15 13:59:37.855658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-07-15 13:59:37.855677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.412 [2024-07-15 13:59:37.855689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.412 [2024-07-15 13:59:37.855909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.412 [2024-07-15 13:59:37.856136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.412 [2024-07-15 13:59:37.856144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.412 [2024-07-15 13:59:37.856151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.412 [2024-07-15 13:59:37.859712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.412 [2024-07-15 13:59:37.868955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.412 [2024-07-15 13:59:37.869700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-07-15 13:59:37.869737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.412 [2024-07-15 13:59:37.869749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.412 [2024-07-15 13:59:37.869993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.412 [2024-07-15 13:59:37.870225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.412 [2024-07-15 13:59:37.870234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.412 [2024-07-15 13:59:37.870242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.412 [2024-07-15 13:59:37.873804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.412 [2024-07-15 13:59:37.882834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.412 [2024-07-15 13:59:37.883617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-07-15 13:59:37.883653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.412 [2024-07-15 13:59:37.883664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.412 [2024-07-15 13:59:37.883904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.412 [2024-07-15 13:59:37.884136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.412 [2024-07-15 13:59:37.884146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.412 [2024-07-15 13:59:37.884153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.412 [2024-07-15 13:59:37.887716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.412 [2024-07-15 13:59:37.896761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.412 [2024-07-15 13:59:37.897504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-07-15 13:59:37.897541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.412 [2024-07-15 13:59:37.897553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.412 [2024-07-15 13:59:37.897794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.412 [2024-07-15 13:59:37.898018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.412 [2024-07-15 13:59:37.898034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.412 [2024-07-15 13:59:37.898042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.412 [2024-07-15 13:59:37.901613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.412 [2024-07-15 13:59:37.910649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.412 [2024-07-15 13:59:37.911439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-07-15 13:59:37.911477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.412 [2024-07-15 13:59:37.911487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.412 [2024-07-15 13:59:37.911727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.412 [2024-07-15 13:59:37.911951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.412 [2024-07-15 13:59:37.911959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.412 [2024-07-15 13:59:37.911967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.412 [2024-07-15 13:59:37.915539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.412 [2024-07-15 13:59:37.924570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.412 [2024-07-15 13:59:37.925232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-07-15 13:59:37.925269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.412 [2024-07-15 13:59:37.925280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.412 [2024-07-15 13:59:37.925524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.412 [2024-07-15 13:59:37.925747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.412 [2024-07-15 13:59:37.925756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.412 [2024-07-15 13:59:37.925764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.412 [2024-07-15 13:59:37.929336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.677 [2024-07-15 13:59:37.938578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.677 [2024-07-15 13:59:37.939337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-07-15 13:59:37.939374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.677 [2024-07-15 13:59:37.939386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.677 [2024-07-15 13:59:37.939626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.677 [2024-07-15 13:59:37.939850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.677 [2024-07-15 13:59:37.939858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.677 [2024-07-15 13:59:37.939866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.677 [2024-07-15 13:59:37.943442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.677 [2024-07-15 13:59:37.952481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.677 [2024-07-15 13:59:37.953199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-07-15 13:59:37.953235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.677 [2024-07-15 13:59:37.953247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.677 [2024-07-15 13:59:37.953490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.678 [2024-07-15 13:59:37.953713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.678 [2024-07-15 13:59:37.953721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.678 [2024-07-15 13:59:37.953729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.678 [2024-07-15 13:59:37.957298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.678 [2024-07-15 13:59:37.966327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.678 [2024-07-15 13:59:37.967040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-07-15 13:59:37.967077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.678 [2024-07-15 13:59:37.967089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.678 [2024-07-15 13:59:37.967340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.678 [2024-07-15 13:59:37.967564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.678 [2024-07-15 13:59:37.967572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.678 [2024-07-15 13:59:37.967580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.678 [2024-07-15 13:59:37.971146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.678 [2024-07-15 13:59:37.980177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.678 [2024-07-15 13:59:37.980763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-07-15 13:59:37.980800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.678 [2024-07-15 13:59:37.980811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.678 [2024-07-15 13:59:37.981051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.678 [2024-07-15 13:59:37.981290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.678 [2024-07-15 13:59:37.981301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.678 [2024-07-15 13:59:37.981309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.678 [2024-07-15 13:59:37.984873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.678 [2024-07-15 13:59:37.994126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.678 [2024-07-15 13:59:37.994834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-07-15 13:59:37.994870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.678 [2024-07-15 13:59:37.994881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.678 [2024-07-15 13:59:37.995142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.678 [2024-07-15 13:59:37.995367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.678 [2024-07-15 13:59:37.995376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.678 [2024-07-15 13:59:37.995383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.678 [2024-07-15 13:59:37.998948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.678 [2024-07-15 13:59:38.007978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.678 [2024-07-15 13:59:38.008606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-07-15 13:59:38.008643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.678 [2024-07-15 13:59:38.008653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.678 [2024-07-15 13:59:38.008893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.678 [2024-07-15 13:59:38.009117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.678 [2024-07-15 13:59:38.009135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.678 [2024-07-15 13:59:38.009143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.678 [2024-07-15 13:59:38.012705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.678 [2024-07-15 13:59:38.021942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.678 [2024-07-15 13:59:38.022570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-07-15 13:59:38.022588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.678 [2024-07-15 13:59:38.022596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.678 [2024-07-15 13:59:38.022817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.678 [2024-07-15 13:59:38.023036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.678 [2024-07-15 13:59:38.023044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.678 [2024-07-15 13:59:38.023051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.678 [2024-07-15 13:59:38.026615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.678 [2024-07-15 13:59:38.035850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.678 [2024-07-15 13:59:38.036531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-07-15 13:59:38.036568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.678 [2024-07-15 13:59:38.036580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.678 [2024-07-15 13:59:38.036824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.678 [2024-07-15 13:59:38.037047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.678 [2024-07-15 13:59:38.037055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.678 [2024-07-15 13:59:38.037067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.678 [2024-07-15 13:59:38.040664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.678 [2024-07-15 13:59:38.049704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.678 [2024-07-15 13:59:38.050475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-07-15 13:59:38.050512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.678 [2024-07-15 13:59:38.050523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.678 [2024-07-15 13:59:38.050762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.678 [2024-07-15 13:59:38.050986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.678 [2024-07-15 13:59:38.050994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.678 [2024-07-15 13:59:38.051002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.678 [2024-07-15 13:59:38.054570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.678 [2024-07-15 13:59:38.063600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.678 [2024-07-15 13:59:38.064370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-07-15 13:59:38.064408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.678 [2024-07-15 13:59:38.064418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.678 [2024-07-15 13:59:38.064658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.678 [2024-07-15 13:59:38.064882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.678 [2024-07-15 13:59:38.064891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.678 [2024-07-15 13:59:38.064898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.678 [2024-07-15 13:59:38.068468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.678 [2024-07-15 13:59:38.077500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.678 [2024-07-15 13:59:38.078217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-07-15 13:59:38.078253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.678 [2024-07-15 13:59:38.078265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.678 [2024-07-15 13:59:38.078508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.678 [2024-07-15 13:59:38.078732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.678 [2024-07-15 13:59:38.078741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.678 [2024-07-15 13:59:38.078750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.678 [2024-07-15 13:59:38.082325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.678 [2024-07-15 13:59:38.091338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.678 [2024-07-15 13:59:38.091967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-07-15 13:59:38.092004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.678 [2024-07-15 13:59:38.092014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.678 [2024-07-15 13:59:38.092263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.678 [2024-07-15 13:59:38.092487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.678 [2024-07-15 13:59:38.092496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.678 [2024-07-15 13:59:38.092503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.678 [2024-07-15 13:59:38.096075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.678 [2024-07-15 13:59:38.105321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.678 [2024-07-15 13:59:38.105935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-07-15 13:59:38.105953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.678 [2024-07-15 13:59:38.105961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.679 [2024-07-15 13:59:38.106188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.679 [2024-07-15 13:59:38.106409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.679 [2024-07-15 13:59:38.106416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.679 [2024-07-15 13:59:38.106423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.679 [2024-07-15 13:59:38.109979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.679 [2024-07-15 13:59:38.119210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.679 [2024-07-15 13:59:38.119892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-07-15 13:59:38.119929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.679 [2024-07-15 13:59:38.119940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.679 [2024-07-15 13:59:38.120188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.679 [2024-07-15 13:59:38.120413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.679 [2024-07-15 13:59:38.120421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.679 [2024-07-15 13:59:38.120429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.679 [2024-07-15 13:59:38.123991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.679 [2024-07-15 13:59:38.133025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.679 [2024-07-15 13:59:38.133658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-07-15 13:59:38.133676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.679 [2024-07-15 13:59:38.133684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.679 [2024-07-15 13:59:38.133905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.679 [2024-07-15 13:59:38.134136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.679 [2024-07-15 13:59:38.134144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.679 [2024-07-15 13:59:38.134151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.679 [2024-07-15 13:59:38.137709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.679 [2024-07-15 13:59:38.146940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.679 [2024-07-15 13:59:38.147609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-07-15 13:59:38.147646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.679 [2024-07-15 13:59:38.147656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.679 [2024-07-15 13:59:38.147896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.679 [2024-07-15 13:59:38.148120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.679 [2024-07-15 13:59:38.148137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.679 [2024-07-15 13:59:38.148145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.679 [2024-07-15 13:59:38.151709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.679 [2024-07-15 13:59:38.160941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.679 [2024-07-15 13:59:38.161633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-07-15 13:59:38.161670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.679 [2024-07-15 13:59:38.161680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.679 [2024-07-15 13:59:38.161920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.679 [2024-07-15 13:59:38.162152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.679 [2024-07-15 13:59:38.162161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.679 [2024-07-15 13:59:38.162168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.679 [2024-07-15 13:59:38.165729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.679 [2024-07-15 13:59:38.174758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.679 [2024-07-15 13:59:38.175515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-07-15 13:59:38.175552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.679 [2024-07-15 13:59:38.175563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.679 [2024-07-15 13:59:38.175803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.679 [2024-07-15 13:59:38.176026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.679 [2024-07-15 13:59:38.176035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.679 [2024-07-15 13:59:38.176042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.679 [2024-07-15 13:59:38.179619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.679 [2024-07-15 13:59:38.188645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.679 [2024-07-15 13:59:38.189332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-07-15 13:59:38.189369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.679 [2024-07-15 13:59:38.189380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.679 [2024-07-15 13:59:38.189620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:11.679 [2024-07-15 13:59:38.189843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.679 [2024-07-15 13:59:38.189852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.679 [2024-07-15 13:59:38.189859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.679 [2024-07-15 13:59:38.193434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.999 [2024-07-15 13:59:38.202478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.999 [2024-07-15 13:59:38.203186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-15 13:59:38.203222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:11.999 [2024-07-15 13:59:38.203234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:11.999 [2024-07-15 13:59:38.203478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.000 [2024-07-15 13:59:38.203702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.000 [2024-07-15 13:59:38.203710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.000 [2024-07-15 13:59:38.203718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.000 [2024-07-15 13:59:38.207293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.000 [2024-07-15 13:59:38.216327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.000 [2024-07-15 13:59:38.217077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-15 13:59:38.217113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.000 [2024-07-15 13:59:38.217135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.000 [2024-07-15 13:59:38.217376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.000 [2024-07-15 13:59:38.217600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.000 [2024-07-15 13:59:38.217608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.000 [2024-07-15 13:59:38.217616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.000 [2024-07-15 13:59:38.221181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.000 [2024-07-15 13:59:38.230209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.000 [2024-07-15 13:59:38.230905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-15 13:59:38.230942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.000 [2024-07-15 13:59:38.230957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.000 [2024-07-15 13:59:38.231206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.000 [2024-07-15 13:59:38.231431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.000 [2024-07-15 13:59:38.231439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.000 [2024-07-15 13:59:38.231447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.000 [2024-07-15 13:59:38.235009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.000 [2024-07-15 13:59:38.244040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.000 [2024-07-15 13:59:38.244797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-15 13:59:38.244834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.000 [2024-07-15 13:59:38.244845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.000 [2024-07-15 13:59:38.245084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.000 [2024-07-15 13:59:38.245317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.000 [2024-07-15 13:59:38.245326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.000 [2024-07-15 13:59:38.245334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.000 [2024-07-15 13:59:38.248927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.000 [2024-07-15 13:59:38.257965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.000 [2024-07-15 13:59:38.258659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-15 13:59:38.258696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.000 [2024-07-15 13:59:38.258707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.000 [2024-07-15 13:59:38.258947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.000 [2024-07-15 13:59:38.259180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.000 [2024-07-15 13:59:38.259189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.000 [2024-07-15 13:59:38.259197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.000 [2024-07-15 13:59:38.262762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.000 [2024-07-15 13:59:38.271790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.000 [2024-07-15 13:59:38.272520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-15 13:59:38.272557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.000 [2024-07-15 13:59:38.272568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.000 [2024-07-15 13:59:38.272808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.000 [2024-07-15 13:59:38.273031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.000 [2024-07-15 13:59:38.273044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.000 [2024-07-15 13:59:38.273052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.000 [2024-07-15 13:59:38.276626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.000 [2024-07-15 13:59:38.285657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.000 [2024-07-15 13:59:38.286390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-15 13:59:38.286427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.000 [2024-07-15 13:59:38.286438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.000 [2024-07-15 13:59:38.286677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.000 [2024-07-15 13:59:38.286901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.000 [2024-07-15 13:59:38.286909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.000 [2024-07-15 13:59:38.286917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.000 [2024-07-15 13:59:38.290489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.000 [2024-07-15 13:59:38.299530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.000 [2024-07-15 13:59:38.300224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-15 13:59:38.300261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.000 [2024-07-15 13:59:38.300273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.000 [2024-07-15 13:59:38.300516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.000 [2024-07-15 13:59:38.300739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.000 [2024-07-15 13:59:38.300748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.000 [2024-07-15 13:59:38.300756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.000 [2024-07-15 13:59:38.304328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.000 [2024-07-15 13:59:38.313357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.000 [2024-07-15 13:59:38.314071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-15 13:59:38.314108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.000 [2024-07-15 13:59:38.314120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.000 [2024-07-15 13:59:38.314369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.000 [2024-07-15 13:59:38.314593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.000 [2024-07-15 13:59:38.314601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.000 [2024-07-15 13:59:38.314608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.000 [2024-07-15 13:59:38.318174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.000 [2024-07-15 13:59:38.327202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.000 [2024-07-15 13:59:38.327904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-15 13:59:38.327940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.000 [2024-07-15 13:59:38.327951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.000 [2024-07-15 13:59:38.328199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.000 [2024-07-15 13:59:38.328423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.000 [2024-07-15 13:59:38.328432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.000 [2024-07-15 13:59:38.328439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.000 [2024-07-15 13:59:38.332002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.000 [2024-07-15 13:59:38.341035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.000 [2024-07-15 13:59:38.341746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-15 13:59:38.341783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.000 [2024-07-15 13:59:38.341794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.000 [2024-07-15 13:59:38.342033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.000 [2024-07-15 13:59:38.342266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.000 [2024-07-15 13:59:38.342275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.000 [2024-07-15 13:59:38.342283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.000 [2024-07-15 13:59:38.345848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.000 [2024-07-15 13:59:38.354875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.000 [2024-07-15 13:59:38.355503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-15 13:59:38.355540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.001 [2024-07-15 13:59:38.355550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.001 [2024-07-15 13:59:38.355790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.001 [2024-07-15 13:59:38.356014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.001 [2024-07-15 13:59:38.356022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.001 [2024-07-15 13:59:38.356029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.001 [2024-07-15 13:59:38.359602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.001 [2024-07-15 13:59:38.368841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.001 [2024-07-15 13:59:38.369553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-15 13:59:38.369590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.001 [2024-07-15 13:59:38.369601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.001 [2024-07-15 13:59:38.369845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.001 [2024-07-15 13:59:38.370069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.001 [2024-07-15 13:59:38.370077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.001 [2024-07-15 13:59:38.370085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.001 [2024-07-15 13:59:38.373655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.001 [2024-07-15 13:59:38.382687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.001 [2024-07-15 13:59:38.383414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-15 13:59:38.383451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.001 [2024-07-15 13:59:38.383461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.001 [2024-07-15 13:59:38.383701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.001 [2024-07-15 13:59:38.383925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.001 [2024-07-15 13:59:38.383933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.001 [2024-07-15 13:59:38.383941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.001 [2024-07-15 13:59:38.387514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.001 [2024-07-15 13:59:38.396536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.001 [2024-07-15 13:59:38.397217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-15 13:59:38.397253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.001 [2024-07-15 13:59:38.397265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.001 [2024-07-15 13:59:38.397506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.001 [2024-07-15 13:59:38.397730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.001 [2024-07-15 13:59:38.397738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.001 [2024-07-15 13:59:38.397746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.001 [2024-07-15 13:59:38.401328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.001 [2024-07-15 13:59:38.410366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.001 [2024-07-15 13:59:38.411031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-15 13:59:38.411049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.001 [2024-07-15 13:59:38.411057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.001 [2024-07-15 13:59:38.411283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.001 [2024-07-15 13:59:38.411504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.001 [2024-07-15 13:59:38.411512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.001 [2024-07-15 13:59:38.411524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.001 [2024-07-15 13:59:38.415082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.001 [2024-07-15 13:59:38.424315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.001 [2024-07-15 13:59:38.424963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-15 13:59:38.424979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.001 [2024-07-15 13:59:38.424986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.001 [2024-07-15 13:59:38.425212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.001 [2024-07-15 13:59:38.425432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.001 [2024-07-15 13:59:38.425439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.001 [2024-07-15 13:59:38.425446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.001 [2024-07-15 13:59:38.429002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.001 [2024-07-15 13:59:38.438233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.001 [2024-07-15 13:59:38.438837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-15 13:59:38.438852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.001 [2024-07-15 13:59:38.438859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.001 [2024-07-15 13:59:38.439079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.001 [2024-07-15 13:59:38.439304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.001 [2024-07-15 13:59:38.439313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.001 [2024-07-15 13:59:38.439320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.001 [2024-07-15 13:59:38.442905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.001 [2024-07-15 13:59:38.452141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.001 [2024-07-15 13:59:38.452701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-15 13:59:38.452737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.001 [2024-07-15 13:59:38.452748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.001 [2024-07-15 13:59:38.452988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.001 [2024-07-15 13:59:38.453220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.001 [2024-07-15 13:59:38.453229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.001 [2024-07-15 13:59:38.453238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.001 [2024-07-15 13:59:38.456830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.001 [2024-07-15 13:59:38.466078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.001 [2024-07-15 13:59:38.466814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-15 13:59:38.466851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.001 [2024-07-15 13:59:38.466862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.001 [2024-07-15 13:59:38.467102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.001 [2024-07-15 13:59:38.467334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.001 [2024-07-15 13:59:38.467343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.001 [2024-07-15 13:59:38.467351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.001 [2024-07-15 13:59:38.470912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.001 [2024-07-15 13:59:38.479941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.001 [2024-07-15 13:59:38.480585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-15 13:59:38.480603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.001 [2024-07-15 13:59:38.480611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.001 [2024-07-15 13:59:38.480832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.001 [2024-07-15 13:59:38.481052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.001 [2024-07-15 13:59:38.481060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.001 [2024-07-15 13:59:38.481067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.001 [2024-07-15 13:59:38.484635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.001 [2024-07-15 13:59:38.493869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.001 [2024-07-15 13:59:38.494538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-15 13:59:38.494575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.001 [2024-07-15 13:59:38.494585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.001 [2024-07-15 13:59:38.494825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.001 [2024-07-15 13:59:38.495049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.001 [2024-07-15 13:59:38.495057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.001 [2024-07-15 13:59:38.495064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.001 [2024-07-15 13:59:38.498644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.001 [2024-07-15 13:59:38.507717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.001 [2024-07-15 13:59:38.508461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-15 13:59:38.508498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.002 [2024-07-15 13:59:38.508508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.002 [2024-07-15 13:59:38.508748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.002 [2024-07-15 13:59:38.508975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.002 [2024-07-15 13:59:38.508984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.002 [2024-07-15 13:59:38.508992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.002 [2024-07-15 13:59:38.512564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.264 [2024-07-15 13:59:38.521596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.264 [2024-07-15 13:59:38.522379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.264 [2024-07-15 13:59:38.522416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.264 [2024-07-15 13:59:38.522426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.264 [2024-07-15 13:59:38.522666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.264 [2024-07-15 13:59:38.522890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.264 [2024-07-15 13:59:38.522899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.264 [2024-07-15 13:59:38.522906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.264 [2024-07-15 13:59:38.526481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.264 [2024-07-15 13:59:38.535519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.264 [2024-07-15 13:59:38.536225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.264 [2024-07-15 13:59:38.536263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.264 [2024-07-15 13:59:38.536275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.264 [2024-07-15 13:59:38.536518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.264 [2024-07-15 13:59:38.536742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.264 [2024-07-15 13:59:38.536751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.264 [2024-07-15 13:59:38.536758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.264 [2024-07-15 13:59:38.540327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.264 [2024-07-15 13:59:38.549361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.264 [2024-07-15 13:59:38.550071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.264 [2024-07-15 13:59:38.550107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.264 [2024-07-15 13:59:38.550119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.264 [2024-07-15 13:59:38.550370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.264 [2024-07-15 13:59:38.550594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.264 [2024-07-15 13:59:38.550603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.264 [2024-07-15 13:59:38.550610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.264 [2024-07-15 13:59:38.554182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.264 [2024-07-15 13:59:38.563214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.264 [2024-07-15 13:59:38.563885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.264 [2024-07-15 13:59:38.563922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.264 [2024-07-15 13:59:38.563933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.264 [2024-07-15 13:59:38.564180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.264 [2024-07-15 13:59:38.564404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.264 [2024-07-15 13:59:38.564413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.264 [2024-07-15 13:59:38.564421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.264 [2024-07-15 13:59:38.567982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.264 [2024-07-15 13:59:38.577222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.264 [2024-07-15 13:59:38.577895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.264 [2024-07-15 13:59:38.577931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.264 [2024-07-15 13:59:38.577942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.264 [2024-07-15 13:59:38.578189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.264 [2024-07-15 13:59:38.578414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.264 [2024-07-15 13:59:38.578422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.264 [2024-07-15 13:59:38.578430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.264 [2024-07-15 13:59:38.581993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.264 [2024-07-15 13:59:38.591029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.264 [2024-07-15 13:59:38.591740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.264 [2024-07-15 13:59:38.591777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.264 [2024-07-15 13:59:38.591787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.264 [2024-07-15 13:59:38.592027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.264 [2024-07-15 13:59:38.592258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.264 [2024-07-15 13:59:38.592267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.264 [2024-07-15 13:59:38.592275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.264 [2024-07-15 13:59:38.595839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.264 [2024-07-15 13:59:38.604878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.264 [2024-07-15 13:59:38.605565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.264 [2024-07-15 13:59:38.605602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.264 [2024-07-15 13:59:38.605621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.264 [2024-07-15 13:59:38.605861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.264 [2024-07-15 13:59:38.606084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.264 [2024-07-15 13:59:38.606093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.264 [2024-07-15 13:59:38.606101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.264 [2024-07-15 13:59:38.609670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.264 [2024-07-15 13:59:38.618711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.264 [2024-07-15 13:59:38.619429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.264 [2024-07-15 13:59:38.619466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.264 [2024-07-15 13:59:38.619477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.264 [2024-07-15 13:59:38.619717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.264 [2024-07-15 13:59:38.619940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.264 [2024-07-15 13:59:38.619949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.264 [2024-07-15 13:59:38.619956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.264 [2024-07-15 13:59:38.623526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.264 [2024-07-15 13:59:38.632555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.264 [2024-07-15 13:59:38.633104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.264 [2024-07-15 13:59:38.633128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.264 [2024-07-15 13:59:38.633137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.264 [2024-07-15 13:59:38.633358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.264 [2024-07-15 13:59:38.633577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.264 [2024-07-15 13:59:38.633585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.264 [2024-07-15 13:59:38.633592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.264 [2024-07-15 13:59:38.637151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.264 [2024-07-15 13:59:38.646381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.264 [2024-07-15 13:59:38.646992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.264 [2024-07-15 13:59:38.647007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.264 [2024-07-15 13:59:38.647015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.264 [2024-07-15 13:59:38.647240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.264 [2024-07-15 13:59:38.647460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.264 [2024-07-15 13:59:38.647472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.264 [2024-07-15 13:59:38.647479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.265 [2024-07-15 13:59:38.651034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.265 [2024-07-15 13:59:38.660266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.265 [2024-07-15 13:59:38.660916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.265 [2024-07-15 13:59:38.660931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.265 [2024-07-15 13:59:38.660939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.265 [2024-07-15 13:59:38.661165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.265 [2024-07-15 13:59:38.661386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.265 [2024-07-15 13:59:38.661393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.265 [2024-07-15 13:59:38.661401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.265 [2024-07-15 13:59:38.664982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.265 [2024-07-15 13:59:38.674226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.265 [2024-07-15 13:59:38.675002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.265 [2024-07-15 13:59:38.675039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.265 [2024-07-15 13:59:38.675049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.265 [2024-07-15 13:59:38.675298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.265 [2024-07-15 13:59:38.675523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.265 [2024-07-15 13:59:38.675531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.265 [2024-07-15 13:59:38.675539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.265 [2024-07-15 13:59:38.679102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.265 [2024-07-15 13:59:38.688130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.265 [2024-07-15 13:59:38.688844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.265 [2024-07-15 13:59:38.688880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.265 [2024-07-15 13:59:38.688891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.265 [2024-07-15 13:59:38.689139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.265 [2024-07-15 13:59:38.689363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.265 [2024-07-15 13:59:38.689372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.265 [2024-07-15 13:59:38.689379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.265 [2024-07-15 13:59:38.692945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.265 [2024-07-15 13:59:38.701990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.265 [2024-07-15 13:59:38.702703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.265 [2024-07-15 13:59:38.702740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.265 [2024-07-15 13:59:38.702750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.265 [2024-07-15 13:59:38.702990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.265 [2024-07-15 13:59:38.703223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.265 [2024-07-15 13:59:38.703232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.265 [2024-07-15 13:59:38.703240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.265 [2024-07-15 13:59:38.706803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.265 [2024-07-15 13:59:38.715831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.265 [2024-07-15 13:59:38.716510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.265 [2024-07-15 13:59:38.716547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.265 [2024-07-15 13:59:38.716558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.265 [2024-07-15 13:59:38.716797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.265 [2024-07-15 13:59:38.717021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.265 [2024-07-15 13:59:38.717029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.265 [2024-07-15 13:59:38.717037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.265 [2024-07-15 13:59:38.720611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.265 [2024-07-15 13:59:38.729641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.265 [2024-07-15 13:59:38.730224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.265 [2024-07-15 13:59:38.730262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.265 [2024-07-15 13:59:38.730274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.265 [2024-07-15 13:59:38.730517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.265 [2024-07-15 13:59:38.730741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.265 [2024-07-15 13:59:38.730749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.265 [2024-07-15 13:59:38.730757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.265 [2024-07-15 13:59:38.734331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.265 [2024-07-15 13:59:38.743572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.265 [2024-07-15 13:59:38.744399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.265 [2024-07-15 13:59:38.744436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.265 [2024-07-15 13:59:38.744447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.265 [2024-07-15 13:59:38.744691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.265 [2024-07-15 13:59:38.744914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.265 [2024-07-15 13:59:38.744923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.265 [2024-07-15 13:59:38.744930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1270175 Killed "${NVMF_APP[@]}" "$@" 00:29:12.265 [2024-07-15 13:59:38.748501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.265 13:59:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:12.265 13:59:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:12.265 13:59:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:12.265 13:59:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:12.265 13:59:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:12.265 13:59:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1271858 00:29:12.265 [2024-07-15 13:59:38.757533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.265 13:59:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1271858 00:29:12.265 13:59:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:12.265 13:59:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1271858 ']' 00:29:12.265 [2024-07-15 13:59:38.758226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.265 [2024-07-15 13:59:38.758263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.265 [2024-07-15 13:59:38.758273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.265 13:59:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.265 [2024-07-15 13:59:38.758512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.265 13:59:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:12.265 [2024-07-15 13:59:38.758736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.265 [2024-07-15 13:59:38.758745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.265 [2024-07-15 13:59:38.758753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.265 13:59:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.265 13:59:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:12.265 13:59:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:12.265 [2024-07-15 13:59:38.762326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
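The shell trace above shows the harness restarting the target: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace, records nvmfpid=1271858, and waitforlisten blocks until the new process is serving /var/tmp/spdk.sock. A minimal sketch of what such a wait can look like follows; it only assumes that readiness can be approximated by the RPC UNIX socket appearing on disk, and it is not SPDK's actual waitforlisten helper.

```bash
#!/usr/bin/env bash
# Minimal sketch of a "wait until the restarted target is listening" loop.
# Illustrative only (NOT SPDK's waitforlisten helper); it assumes readiness
# can be approximated by the RPC UNIX socket showing up.
pid=$1                              # e.g. 1271858, the nvmfpid from the trace
rpc_sock=${2:-/var/tmp/spdk.sock}   # RPC socket path printed in the log

for _ in $(seq 1 100); do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "process $pid exited before listening on $rpc_sock" >&2
        exit 1
    fi
    if [ -S "$rpc_sock" ]; then
        echo "process $pid is up and $rpc_sock exists"
        exit 0
    fi
    sleep 0.1
done
echo "timed out waiting for $rpc_sock" >&2
exit 1
```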
00:29:12.265 [2024-07-15 13:59:38.771357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.265 [2024-07-15 13:59:38.772105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.265 [2024-07-15 13:59:38.772148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.265 [2024-07-15 13:59:38.772161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.265 [2024-07-15 13:59:38.772407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.265 [2024-07-15 13:59:38.772631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.265 [2024-07-15 13:59:38.772640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.265 [2024-07-15 13:59:38.772647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.265 [2024-07-15 13:59:38.776213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.266 [2024-07-15 13:59:38.785246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.266 [2024-07-15 13:59:38.785884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.266 [2024-07-15 13:59:38.785902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.266 [2024-07-15 13:59:38.785910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.266 [2024-07-15 13:59:38.786136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.266 [2024-07-15 13:59:38.786357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.266 [2024-07-15 13:59:38.786365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.266 [2024-07-15 13:59:38.786371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.528 [2024-07-15 13:59:38.789926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.528 [2024-07-15 13:59:38.799180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.528 [2024-07-15 13:59:38.799800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.528 [2024-07-15 13:59:38.799815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.528 [2024-07-15 13:59:38.799824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.528 [2024-07-15 13:59:38.800044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.528 [2024-07-15 13:59:38.800271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.528 [2024-07-15 13:59:38.800281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.528 [2024-07-15 13:59:38.800289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.528 [2024-07-15 13:59:38.803845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.528 [2024-07-15 13:59:38.807568] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:29:12.528 [2024-07-15 13:59:38.807612] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.528 [2024-07-15 13:59:38.813085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.528 [2024-07-15 13:59:38.813802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.528 [2024-07-15 13:59:38.813839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.528 [2024-07-15 13:59:38.813850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.528 [2024-07-15 13:59:38.814091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.528 [2024-07-15 13:59:38.814327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.528 [2024-07-15 13:59:38.814337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.528 [2024-07-15 13:59:38.814345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.528 [2024-07-15 13:59:38.817909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
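The EAL parameter line above shows the target starting with core mask 0xE (the -m 0xE passed to nvmfappstart), which selects cores 1, 2 and 3; that is consistent with the "Total cores available: 3" notice and the reactors that start on cores 1-3 further down. A small illustrative sketch for decoding such a mask, not taken from the test scripts:

```bash
#!/usr/bin/env bash
# Decode a DPDK/SPDK-style hexadecimal core mask into core indices.
# 0xE is binary 1110, i.e. cores 1, 2 and 3, which matches the three
# reactors reported later in this log.
mask=${1:-0xE}
cores=()
for core in $(seq 0 63); do
    if (( (mask >> core) & 1 )); then
        cores+=("$core")
    fi
done
echo "core mask $mask selects cores: ${cores[*]}"
```

Run with the mask from this log it prints "core mask 0xE selects cores: 1 2 3".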
00:29:12.528 [2024-07-15 13:59:38.826943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.528 [2024-07-15 13:59:38.827432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.528 [2024-07-15 13:59:38.827451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.528 [2024-07-15 13:59:38.827459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.528 [2024-07-15 13:59:38.827679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.528 [2024-07-15 13:59:38.827900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.528 [2024-07-15 13:59:38.827908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.528 [2024-07-15 13:59:38.827915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.528 [2024-07-15 13:59:38.831477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.528 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.528 [2024-07-15 13:59:38.840926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.528 [2024-07-15 13:59:38.841562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.528 [2024-07-15 13:59:38.841578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.528 [2024-07-15 13:59:38.841586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.528 [2024-07-15 13:59:38.841805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.528 [2024-07-15 13:59:38.842025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.528 [2024-07-15 13:59:38.842034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.528 [2024-07-15 13:59:38.842041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.528 [2024-07-15 13:59:38.845599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
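The "EAL: No free 2048 kB hugepages reported on node 1" notice above means NUMA node 1 currently has no free 2 MB hugepages; initialization continues because memory is available elsewhere. When a message like this needs investigating, the stock procfs/sysfs counters can be checked; the sketch below assumes nothing beyond those standard Linux paths, and which nodes exist depends on the machine.

```bash
#!/usr/bin/env bash
# Inspect 2048 kB hugepage availability overall and per NUMA node using the
# standard Linux procfs/sysfs paths. Node 1 is of interest here only because
# the EAL notice above mentions it.
grep -i huge /proc/meminfo

for node in /sys/devices/system/node/node*; do
    hp="$node/hugepages/hugepages-2048kB"
    [ -d "$hp" ] || continue
    printf '%s: total=%s free=%s\n' \
        "$(basename "$node")" "$(cat "$hp/nr_hugepages")" "$(cat "$hp/free_hugepages")"
done
```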
00:29:12.528 [2024-07-15 13:59:38.854839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.528 [2024-07-15 13:59:38.855563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.528 [2024-07-15 13:59:38.855600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.528 [2024-07-15 13:59:38.855611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.528 [2024-07-15 13:59:38.855851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.528 [2024-07-15 13:59:38.856075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.528 [2024-07-15 13:59:38.856083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.528 [2024-07-15 13:59:38.856095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.528 [2024-07-15 13:59:38.859664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.528 [2024-07-15 13:59:38.868798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.528 [2024-07-15 13:59:38.869472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.528 [2024-07-15 13:59:38.869491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.528 [2024-07-15 13:59:38.869499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.528 [2024-07-15 13:59:38.869721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.528 [2024-07-15 13:59:38.869941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.528 [2024-07-15 13:59:38.869950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.528 [2024-07-15 13:59:38.869957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.528 [2024-07-15 13:59:38.873559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.528 [2024-07-15 13:59:38.882802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.528 [2024-07-15 13:59:38.883507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.528 [2024-07-15 13:59:38.883544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.528 [2024-07-15 13:59:38.883554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.528 [2024-07-15 13:59:38.883795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.528 [2024-07-15 13:59:38.884018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.528 [2024-07-15 13:59:38.884027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.529 [2024-07-15 13:59:38.884035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.529 [2024-07-15 13:59:38.887606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.529 [2024-07-15 13:59:38.889897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:12.529 [2024-07-15 13:59:38.896644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.529 [2024-07-15 13:59:38.897448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.529 [2024-07-15 13:59:38.897485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.529 [2024-07-15 13:59:38.897498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.529 [2024-07-15 13:59:38.897739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.529 [2024-07-15 13:59:38.897963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.529 [2024-07-15 13:59:38.897971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.529 [2024-07-15 13:59:38.897979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.529 [2024-07-15 13:59:38.901567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.529 [2024-07-15 13:59:38.910604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.529 [2024-07-15 13:59:38.911397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.529 [2024-07-15 13:59:38.911434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.529 [2024-07-15 13:59:38.911446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.529 [2024-07-15 13:59:38.911688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.529 [2024-07-15 13:59:38.911912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.529 [2024-07-15 13:59:38.911920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.529 [2024-07-15 13:59:38.911928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.529 [2024-07-15 13:59:38.915498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.529 [2024-07-15 13:59:38.924536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.529 [2024-07-15 13:59:38.925234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.529 [2024-07-15 13:59:38.925272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.529 [2024-07-15 13:59:38.925285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.529 [2024-07-15 13:59:38.925529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.529 [2024-07-15 13:59:38.925753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.529 [2024-07-15 13:59:38.925762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.529 [2024-07-15 13:59:38.925770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.529 [2024-07-15 13:59:38.929337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.529 [2024-07-15 13:59:38.938372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.529 [2024-07-15 13:59:38.938961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.529 [2024-07-15 13:59:38.938997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.529 [2024-07-15 13:59:38.939008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.529 [2024-07-15 13:59:38.939256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.529 [2024-07-15 13:59:38.939480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.529 [2024-07-15 13:59:38.939489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.529 [2024-07-15 13:59:38.939497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.529 [2024-07-15 13:59:38.943057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.529 [2024-07-15 13:59:38.943446] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.529 [2024-07-15 13:59:38.943469] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.529 [2024-07-15 13:59:38.943475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.529 [2024-07-15 13:59:38.943480] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.529 [2024-07-15 13:59:38.943485] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:12.529 [2024-07-15 13:59:38.943621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:12.529 [2024-07-15 13:59:38.943778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.529 [2024-07-15 13:59:38.943780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:12.529 [2024-07-15 13:59:38.952303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.529 [2024-07-15 13:59:38.952906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.529 [2024-07-15 13:59:38.952943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.529 [2024-07-15 13:59:38.952955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.529 [2024-07-15 13:59:38.953202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.529 [2024-07-15 13:59:38.953427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.529 [2024-07-15 13:59:38.953435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.529 [2024-07-15 13:59:38.953443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
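Because the target was launched with -e 0xFFFF, every tracepoint group is enabled and app_setup_trace prints how to collect the trace. Following the log's own hints, a snapshot can be taken from the running application or the shared-memory file copied for offline analysis; the sketch below uses only the commands and paths printed in the log, assumes spdk_trace is on PATH (it is built alongside the other SPDK binaries), and picks an arbitrary /tmp destination for the copy.

```bash
#!/usr/bin/env bash
# Grab the nvmf target's trace, following the two options the log prints.
# The shm name 'nvmf', instance id 0 and the /dev/shm/nvmf_trace.0 path are
# taken from the log itself; the /tmp destination is an arbitrary choice.

# Option 1: live snapshot of events from the running application.
spdk_trace -s nvmf -i 0

# Option 2: copy the trace shared-memory file for offline analysis/debug.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
```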
00:29:12.529 [2024-07-15 13:59:38.957006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.529 [2024-07-15 13:59:38.966252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.529 [2024-07-15 13:59:38.966803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.529 [2024-07-15 13:59:38.966822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.529 [2024-07-15 13:59:38.966831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.529 [2024-07-15 13:59:38.967052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.529 [2024-07-15 13:59:38.967279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.529 [2024-07-15 13:59:38.967287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.529 [2024-07-15 13:59:38.967295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.529 [2024-07-15 13:59:38.970854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.529 [2024-07-15 13:59:38.980101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.529 [2024-07-15 13:59:38.980839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.529 [2024-07-15 13:59:38.980878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.529 [2024-07-15 13:59:38.980889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.529 [2024-07-15 13:59:38.981139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.529 [2024-07-15 13:59:38.981365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.529 [2024-07-15 13:59:38.981374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.529 [2024-07-15 13:59:38.981382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.529 [2024-07-15 13:59:38.984945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.529 [2024-07-15 13:59:38.993982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.529 [2024-07-15 13:59:38.994646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.529 [2024-07-15 13:59:38.994670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.529 [2024-07-15 13:59:38.994678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.529 [2024-07-15 13:59:38.994899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.529 [2024-07-15 13:59:38.995120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.529 [2024-07-15 13:59:38.995135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.529 [2024-07-15 13:59:38.995142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.529 [2024-07-15 13:59:38.998702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.529 [2024-07-15 13:59:39.007960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.529 [2024-07-15 13:59:39.008591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.529 [2024-07-15 13:59:39.008607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.529 [2024-07-15 13:59:39.008615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.529 [2024-07-15 13:59:39.008835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.529 [2024-07-15 13:59:39.009054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.529 [2024-07-15 13:59:39.009062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.529 [2024-07-15 13:59:39.009069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.529 [2024-07-15 13:59:39.012632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.529 [2024-07-15 13:59:39.021869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.529 [2024-07-15 13:59:39.022595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.529 [2024-07-15 13:59:39.022632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.529 [2024-07-15 13:59:39.022643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.529 [2024-07-15 13:59:39.022884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.530 [2024-07-15 13:59:39.023107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.530 [2024-07-15 13:59:39.023117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.530 [2024-07-15 13:59:39.023133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.530 [2024-07-15 13:59:39.026697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.530 [2024-07-15 13:59:39.035729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.530 [2024-07-15 13:59:39.036262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.530 [2024-07-15 13:59:39.036298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.530 [2024-07-15 13:59:39.036310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.530 [2024-07-15 13:59:39.036551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.530 [2024-07-15 13:59:39.036779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.530 [2024-07-15 13:59:39.036788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.530 [2024-07-15 13:59:39.036796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.530 [2024-07-15 13:59:39.040374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.530 [2024-07-15 13:59:39.049649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.530 [2024-07-15 13:59:39.050211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.530 [2024-07-15 13:59:39.050249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.530 [2024-07-15 13:59:39.050261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.530 [2024-07-15 13:59:39.050503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.530 [2024-07-15 13:59:39.050726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.530 [2024-07-15 13:59:39.050735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.530 [2024-07-15 13:59:39.050745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.792 [2024-07-15 13:59:39.054318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.792 [2024-07-15 13:59:39.063581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.792 [2024-07-15 13:59:39.064121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.792 [2024-07-15 13:59:39.064145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.792 [2024-07-15 13:59:39.064153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.792 [2024-07-15 13:59:39.064374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.792 [2024-07-15 13:59:39.064594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.792 [2024-07-15 13:59:39.064602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.792 [2024-07-15 13:59:39.064610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.792 [2024-07-15 13:59:39.068171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.792 [2024-07-15 13:59:39.077413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.792 [2024-07-15 13:59:39.078080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.792 [2024-07-15 13:59:39.078095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.792 [2024-07-15 13:59:39.078103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.792 [2024-07-15 13:59:39.078329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.792 [2024-07-15 13:59:39.078549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.792 [2024-07-15 13:59:39.078557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.792 [2024-07-15 13:59:39.078564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.792 [2024-07-15 13:59:39.082240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.792 [2024-07-15 13:59:39.091288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.792 [2024-07-15 13:59:39.092013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.792 [2024-07-15 13:59:39.092050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.792 [2024-07-15 13:59:39.092061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.792 [2024-07-15 13:59:39.092307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.792 [2024-07-15 13:59:39.092532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.792 [2024-07-15 13:59:39.092541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.792 [2024-07-15 13:59:39.092548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.792 [2024-07-15 13:59:39.096110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.792 [2024-07-15 13:59:39.105158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.792 [2024-07-15 13:59:39.105794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.792 [2024-07-15 13:59:39.105813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.792 [2024-07-15 13:59:39.105821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.792 [2024-07-15 13:59:39.106042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.792 [2024-07-15 13:59:39.106269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.792 [2024-07-15 13:59:39.106278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.792 [2024-07-15 13:59:39.106285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.792 [2024-07-15 13:59:39.109844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.792 [2024-07-15 13:59:39.119078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.792 [2024-07-15 13:59:39.119709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.792 [2024-07-15 13:59:39.119726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.792 [2024-07-15 13:59:39.119734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.792 [2024-07-15 13:59:39.119954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.792 [2024-07-15 13:59:39.120180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.792 [2024-07-15 13:59:39.120189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.792 [2024-07-15 13:59:39.120197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.792 [2024-07-15 13:59:39.123758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.792 [2024-07-15 13:59:39.132996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.792 [2024-07-15 13:59:39.133720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.792 [2024-07-15 13:59:39.133758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.792 [2024-07-15 13:59:39.133773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.792 [2024-07-15 13:59:39.134013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.792 [2024-07-15 13:59:39.134244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.792 [2024-07-15 13:59:39.134253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.792 [2024-07-15 13:59:39.134261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.792 [2024-07-15 13:59:39.137827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.792 [2024-07-15 13:59:39.146856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.792 [2024-07-15 13:59:39.147499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.792 [2024-07-15 13:59:39.147519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.792 [2024-07-15 13:59:39.147527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.792 [2024-07-15 13:59:39.147748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.792 [2024-07-15 13:59:39.147967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.792 [2024-07-15 13:59:39.147975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.792 [2024-07-15 13:59:39.147982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.792 [2024-07-15 13:59:39.151544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.792 [2024-07-15 13:59:39.160782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.792 [2024-07-15 13:59:39.161409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.792 [2024-07-15 13:59:39.161425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.792 [2024-07-15 13:59:39.161433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.792 [2024-07-15 13:59:39.161653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.792 [2024-07-15 13:59:39.161872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.792 [2024-07-15 13:59:39.161881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.792 [2024-07-15 13:59:39.161887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.792 [2024-07-15 13:59:39.165449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.792 [2024-07-15 13:59:39.174685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.792 [2024-07-15 13:59:39.175366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.792 [2024-07-15 13:59:39.175404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.792 [2024-07-15 13:59:39.175414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.793 [2024-07-15 13:59:39.175654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.793 [2024-07-15 13:59:39.175878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.793 [2024-07-15 13:59:39.175891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.793 [2024-07-15 13:59:39.175899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.793 [2024-07-15 13:59:39.179470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.793 [2024-07-15 13:59:39.188507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.793 [2024-07-15 13:59:39.189225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.793 [2024-07-15 13:59:39.189262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.793 [2024-07-15 13:59:39.189273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.793 [2024-07-15 13:59:39.189513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.793 [2024-07-15 13:59:39.189737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.793 [2024-07-15 13:59:39.189745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.793 [2024-07-15 13:59:39.189753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.793 [2024-07-15 13:59:39.193325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.793 [2024-07-15 13:59:39.202369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.793 [2024-07-15 13:59:39.203141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.793 [2024-07-15 13:59:39.203177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.793 [2024-07-15 13:59:39.203188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.793 [2024-07-15 13:59:39.203428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.793 [2024-07-15 13:59:39.203652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.793 [2024-07-15 13:59:39.203660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.793 [2024-07-15 13:59:39.203668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.793 [2024-07-15 13:59:39.207235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.793 [2024-07-15 13:59:39.216271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.793 [2024-07-15 13:59:39.216736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.793 [2024-07-15 13:59:39.216755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.793 [2024-07-15 13:59:39.216763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.793 [2024-07-15 13:59:39.216983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.793 [2024-07-15 13:59:39.217208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.793 [2024-07-15 13:59:39.217253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.793 [2024-07-15 13:59:39.217260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.793 [2024-07-15 13:59:39.220821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.793 [2024-07-15 13:59:39.230281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.793 [2024-07-15 13:59:39.231007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.793 [2024-07-15 13:59:39.231044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.793 [2024-07-15 13:59:39.231055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.793 [2024-07-15 13:59:39.231304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.793 [2024-07-15 13:59:39.231529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.793 [2024-07-15 13:59:39.231537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.793 [2024-07-15 13:59:39.231545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.793 [2024-07-15 13:59:39.235106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.793 [2024-07-15 13:59:39.244150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.793 [2024-07-15 13:59:39.244720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.793 [2024-07-15 13:59:39.244757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.793 [2024-07-15 13:59:39.244769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.793 [2024-07-15 13:59:39.245010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.793 [2024-07-15 13:59:39.245241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.793 [2024-07-15 13:59:39.245250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.793 [2024-07-15 13:59:39.245258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.793 [2024-07-15 13:59:39.248824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.793 [2024-07-15 13:59:39.258074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.793 [2024-07-15 13:59:39.258754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.793 [2024-07-15 13:59:39.258772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.793 [2024-07-15 13:59:39.258780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.793 [2024-07-15 13:59:39.259000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.793 [2024-07-15 13:59:39.259227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.793 [2024-07-15 13:59:39.259235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.793 [2024-07-15 13:59:39.259242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.793 [2024-07-15 13:59:39.262801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.793 [2024-07-15 13:59:39.272038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.793 [2024-07-15 13:59:39.272666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.793 [2024-07-15 13:59:39.272683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.793 [2024-07-15 13:59:39.272690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.793 [2024-07-15 13:59:39.272915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.793 [2024-07-15 13:59:39.273140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.793 [2024-07-15 13:59:39.273149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.793 [2024-07-15 13:59:39.273156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.793 [2024-07-15 13:59:39.276714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.793 [2024-07-15 13:59:39.285954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.793 [2024-07-15 13:59:39.286570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.793 [2024-07-15 13:59:39.286586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.793 [2024-07-15 13:59:39.286595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.793 [2024-07-15 13:59:39.286814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.793 [2024-07-15 13:59:39.287035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.793 [2024-07-15 13:59:39.287042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.793 [2024-07-15 13:59:39.287049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.793 [2024-07-15 13:59:39.290640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:12.793 [2024-07-15 13:59:39.299892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.793 [2024-07-15 13:59:39.300603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.793 [2024-07-15 13:59:39.300640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.793 [2024-07-15 13:59:39.300652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.793 [2024-07-15 13:59:39.300891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.793 [2024-07-15 13:59:39.301115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.793 [2024-07-15 13:59:39.301133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.793 [2024-07-15 13:59:39.301141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.793 [2024-07-15 13:59:39.304705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:12.793 [2024-07-15 13:59:39.313733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.793 [2024-07-15 13:59:39.314469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.793 [2024-07-15 13:59:39.314506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:12.793 [2024-07-15 13:59:39.314518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:12.793 [2024-07-15 13:59:39.314758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:12.793 [2024-07-15 13:59:39.314981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:12.793 [2024-07-15 13:59:39.314990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:12.793 [2024-07-15 13:59:39.315005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.055 [2024-07-15 13:59:39.318575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.055 [2024-07-15 13:59:39.327608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.055 [2024-07-15 13:59:39.328387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.055 [2024-07-15 13:59:39.328424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.055 [2024-07-15 13:59:39.328435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.055 [2024-07-15 13:59:39.328674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.055 [2024-07-15 13:59:39.328897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.055 [2024-07-15 13:59:39.328906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.055 [2024-07-15 13:59:39.328914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.055 [2024-07-15 13:59:39.332488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.055 [2024-07-15 13:59:39.341523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.055 [2024-07-15 13:59:39.342233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.055 [2024-07-15 13:59:39.342270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.055 [2024-07-15 13:59:39.342282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.055 [2024-07-15 13:59:39.342526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.055 [2024-07-15 13:59:39.342749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.055 [2024-07-15 13:59:39.342757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.055 [2024-07-15 13:59:39.342765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.055 [2024-07-15 13:59:39.346334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.055 [2024-07-15 13:59:39.355363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.055 [2024-07-15 13:59:39.355878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.055 [2024-07-15 13:59:39.355896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.055 [2024-07-15 13:59:39.355904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.055 [2024-07-15 13:59:39.356132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.055 [2024-07-15 13:59:39.356354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.055 [2024-07-15 13:59:39.356361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.055 [2024-07-15 13:59:39.356368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.055 [2024-07-15 13:59:39.359925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.055 [2024-07-15 13:59:39.369375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.055 [2024-07-15 13:59:39.369927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.055 [2024-07-15 13:59:39.369964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.055 [2024-07-15 13:59:39.369974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.055 [2024-07-15 13:59:39.370221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.055 [2024-07-15 13:59:39.370446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.055 [2024-07-15 13:59:39.370455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.055 [2024-07-15 13:59:39.370463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.055 [2024-07-15 13:59:39.374026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.055 [2024-07-15 13:59:39.383269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.055 [2024-07-15 13:59:39.384032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.055 [2024-07-15 13:59:39.384069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.056 [2024-07-15 13:59:39.384080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.056 [2024-07-15 13:59:39.384328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.056 [2024-07-15 13:59:39.384553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.056 [2024-07-15 13:59:39.384561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.056 [2024-07-15 13:59:39.384569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.056 [2024-07-15 13:59:39.388135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.056 [2024-07-15 13:59:39.397166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.056 [2024-07-15 13:59:39.397901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.056 [2024-07-15 13:59:39.397937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.056 [2024-07-15 13:59:39.397949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.056 [2024-07-15 13:59:39.398196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.056 [2024-07-15 13:59:39.398421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.056 [2024-07-15 13:59:39.398429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.056 [2024-07-15 13:59:39.398437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.056 [2024-07-15 13:59:39.402010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.056 [2024-07-15 13:59:39.411042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.056 [2024-07-15 13:59:39.411687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.056 [2024-07-15 13:59:39.411706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.056 [2024-07-15 13:59:39.411714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.056 [2024-07-15 13:59:39.411936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.056 [2024-07-15 13:59:39.412165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.056 [2024-07-15 13:59:39.412174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.056 [2024-07-15 13:59:39.412181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.056 [2024-07-15 13:59:39.415739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.056 [2024-07-15 13:59:39.424974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.056 [2024-07-15 13:59:39.425401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.056 [2024-07-15 13:59:39.425418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.056 [2024-07-15 13:59:39.425425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.056 [2024-07-15 13:59:39.425645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.056 [2024-07-15 13:59:39.425865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.056 [2024-07-15 13:59:39.425874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.056 [2024-07-15 13:59:39.425881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.056 [2024-07-15 13:59:39.429444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.056 [2024-07-15 13:59:39.438895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.056 [2024-07-15 13:59:39.439448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.056 [2024-07-15 13:59:39.439464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.056 [2024-07-15 13:59:39.439471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.056 [2024-07-15 13:59:39.439691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.056 [2024-07-15 13:59:39.439911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.056 [2024-07-15 13:59:39.439919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.056 [2024-07-15 13:59:39.439926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.056 [2024-07-15 13:59:39.443487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.056 [2024-07-15 13:59:39.452724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.056 [2024-07-15 13:59:39.453499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.056 [2024-07-15 13:59:39.453536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.056 [2024-07-15 13:59:39.453547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.056 [2024-07-15 13:59:39.453787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.056 [2024-07-15 13:59:39.454011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.056 [2024-07-15 13:59:39.454019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.056 [2024-07-15 13:59:39.454027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.056 [2024-07-15 13:59:39.457602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.056 [2024-07-15 13:59:39.466729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.056 [2024-07-15 13:59:39.467376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.056 [2024-07-15 13:59:39.467395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.056 [2024-07-15 13:59:39.467403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.056 [2024-07-15 13:59:39.467624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.056 [2024-07-15 13:59:39.467844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.056 [2024-07-15 13:59:39.467851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.056 [2024-07-15 13:59:39.467858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.056 [2024-07-15 13:59:39.471422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.056 [2024-07-15 13:59:39.480661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.056 [2024-07-15 13:59:39.481437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.056 [2024-07-15 13:59:39.481475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.056 [2024-07-15 13:59:39.481486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.056 [2024-07-15 13:59:39.481726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.056 [2024-07-15 13:59:39.481949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.056 [2024-07-15 13:59:39.481958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.056 [2024-07-15 13:59:39.481965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.056 [2024-07-15 13:59:39.485536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.056 [2024-07-15 13:59:39.494569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.056 [2024-07-15 13:59:39.495404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.056 [2024-07-15 13:59:39.495442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.056 [2024-07-15 13:59:39.495454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.056 [2024-07-15 13:59:39.495695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.056 [2024-07-15 13:59:39.495919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.056 [2024-07-15 13:59:39.495927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.056 [2024-07-15 13:59:39.495935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.056 [2024-07-15 13:59:39.499516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.056 [2024-07-15 13:59:39.508555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.056 [2024-07-15 13:59:39.509225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.056 [2024-07-15 13:59:39.509263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.056 [2024-07-15 13:59:39.509278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.056 [2024-07-15 13:59:39.509518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.056 [2024-07-15 13:59:39.509741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.056 [2024-07-15 13:59:39.509750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.056 [2024-07-15 13:59:39.509759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.056 [2024-07-15 13:59:39.513330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.056 [2024-07-15 13:59:39.522367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.056 [2024-07-15 13:59:39.523152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.056 [2024-07-15 13:59:39.523189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.056 [2024-07-15 13:59:39.523201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.056 [2024-07-15 13:59:39.523445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.056 [2024-07-15 13:59:39.523669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.056 [2024-07-15 13:59:39.523678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.056 [2024-07-15 13:59:39.523685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.056 [2024-07-15 13:59:39.527255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.057 [2024-07-15 13:59:39.536292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.057 [2024-07-15 13:59:39.536751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.057 [2024-07-15 13:59:39.536769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.057 [2024-07-15 13:59:39.536777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.057 [2024-07-15 13:59:39.536998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.057 [2024-07-15 13:59:39.537226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.057 [2024-07-15 13:59:39.537235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.057 [2024-07-15 13:59:39.537242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.057 [2024-07-15 13:59:39.540800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.057 [2024-07-15 13:59:39.550247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.057 [2024-07-15 13:59:39.550998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.057 [2024-07-15 13:59:39.551035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.057 [2024-07-15 13:59:39.551046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.057 [2024-07-15 13:59:39.551293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.057 [2024-07-15 13:59:39.551517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.057 [2024-07-15 13:59:39.551530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.057 [2024-07-15 13:59:39.551538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.057 [2024-07-15 13:59:39.555098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.057 [2024-07-15 13:59:39.564135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.057 [2024-07-15 13:59:39.564902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.057 [2024-07-15 13:59:39.564939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.057 [2024-07-15 13:59:39.564950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.057 [2024-07-15 13:59:39.565197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.057 [2024-07-15 13:59:39.565421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.057 [2024-07-15 13:59:39.565429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.057 [2024-07-15 13:59:39.565437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.057 [2024-07-15 13:59:39.569002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.057 [2024-07-15 13:59:39.578034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.057 [2024-07-15 13:59:39.578710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.057 [2024-07-15 13:59:39.578748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.057 [2024-07-15 13:59:39.578759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.057 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:13.057 [2024-07-15 13:59:39.578999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.057 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:13.057 [2024-07-15 13:59:39.579230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.057 [2024-07-15 13:59:39.579239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.057 [2024-07-15 13:59:39.579247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.057 13:59:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:13.057 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:13.319 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:13.319 [2024-07-15 13:59:39.582812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.319 [2024-07-15 13:59:39.591847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.319 [2024-07-15 13:59:39.592428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-07-15 13:59:39.592466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.319 [2024-07-15 13:59:39.592478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.319 [2024-07-15 13:59:39.592720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.319 [2024-07-15 13:59:39.592945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.319 [2024-07-15 13:59:39.592959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.319 [2024-07-15 13:59:39.592968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.319 [2024-07-15 13:59:39.596539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.319 [2024-07-15 13:59:39.605796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.319 [2024-07-15 13:59:39.606379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-07-15 13:59:39.606398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.319 [2024-07-15 13:59:39.606406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.319 [2024-07-15 13:59:39.606626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.320 [2024-07-15 13:59:39.606846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.320 [2024-07-15 13:59:39.606854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.320 [2024-07-15 13:59:39.606861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.320 [2024-07-15 13:59:39.610426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:13.320 [2024-07-15 13:59:39.619662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.320 [2024-07-15 13:59:39.620205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.320 [2024-07-15 13:59:39.620242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.320 [2024-07-15 13:59:39.620255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.320 [2024-07-15 13:59:39.620497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.320 [2024-07-15 13:59:39.620721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.320 [2024-07-15 13:59:39.620729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.320 [2024-07-15 13:59:39.620737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.320 [2024-07-15 13:59:39.622457] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.320 [2024-07-15 13:59:39.624311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:13.320 [2024-07-15 13:59:39.633554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.320 [2024-07-15 13:59:39.634262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.320 [2024-07-15 13:59:39.634299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.320 [2024-07-15 13:59:39.634315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.320 [2024-07-15 13:59:39.634555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.320 [2024-07-15 13:59:39.634779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.320 [2024-07-15 13:59:39.634788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.320 [2024-07-15 13:59:39.634796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.320 [2024-07-15 13:59:39.638369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.320 [2024-07-15 13:59:39.647404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.320 [2024-07-15 13:59:39.648178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.320 [2024-07-15 13:59:39.648215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.320 [2024-07-15 13:59:39.648228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.320 [2024-07-15 13:59:39.648469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.320 [2024-07-15 13:59:39.648693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.320 [2024-07-15 13:59:39.648701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.320 [2024-07-15 13:59:39.648709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.320 [2024-07-15 13:59:39.652283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.320 Malloc0 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:13.320 [2024-07-15 13:59:39.661315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.320 [2024-07-15 13:59:39.662035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.320 [2024-07-15 13:59:39.662072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.320 [2024-07-15 13:59:39.662085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.320 [2024-07-15 13:59:39.662337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.320 [2024-07-15 13:59:39.662562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.320 [2024-07-15 13:59:39.662571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.320 [2024-07-15 13:59:39.662579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.320 [2024-07-15 13:59:39.666147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:13.320 [2024-07-15 13:59:39.675182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.320 [2024-07-15 13:59:39.675824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.320 [2024-07-15 13:59:39.675842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.320 [2024-07-15 13:59:39.675850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.320 [2024-07-15 13:59:39.676070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.320 [2024-07-15 13:59:39.676297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.320 [2024-07-15 13:59:39.676307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.320 [2024-07-15 13:59:39.676315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.320 [2024-07-15 13:59:39.679872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:13.320 [2024-07-15 13:59:39.689109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.320 [2024-07-15 13:59:39.689727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.320 [2024-07-15 13:59:39.689743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa023b0 with addr=10.0.0.2, port=4420 00:29:13.320 [2024-07-15 13:59:39.689750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa023b0 is same with the state(5) to be set 00:29:13.320 [2024-07-15 13:59:39.689970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa023b0 (9): Bad file descriptor 00:29:13.320 [2024-07-15 13:59:39.690195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:13.320 [2024-07-15 13:59:39.690203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:13.320 [2024-07-15 13:59:39.690210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:13.320 [2024-07-15 13:59:39.690472] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.320 [2024-07-15 13:59:39.693766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.320 13:59:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1270837 00:29:13.320 [2024-07-15 13:59:39.703014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.581 [2024-07-15 13:59:39.868473] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
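With the TCP transport, the Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 and the listener on 10.0.0.2:4420 in place, the pending resets finally succeed ("Resetting controller successful") and bdevperf can complete. As a minimal sketch, assuming rpc_cmd here wraps the stock scripts/rpc.py client talking to the running nvmf_tgt, the same target configuration could be reproduced by hand with:
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                   # transport options copied from the test (-o, 8192-byte IO unit)
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                      # 64 MiB RAM-backed bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 # allow any host, fixed serial
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420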
00:29:21.714 00:29:21.714 Latency(us) 00:29:21.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.714 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:21.714 Verification LBA range: start 0x0 length 0x4000 00:29:21.714 Nvme1n1 : 15.00 8357.87 32.65 9961.72 0.00 6962.01 1058.13 17585.49 00:29:21.714 =================================================================================================================== 00:29:21.714 Total : 8357.87 32.65 9961.72 0.00 6962.01 1058.13 17585.49 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:21.975 rmmod nvme_tcp 00:29:21.975 rmmod nvme_fabrics 00:29:21.975 rmmod nvme_keyring 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1271858 ']' 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1271858 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1271858 ']' 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1271858 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1271858 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1271858' 00:29:21.975 killing process with pid 1271858 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1271858 00:29:21.975 13:59:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1271858 00:29:22.236 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:22.236 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:29:22.236 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:22.236 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:22.236 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:22.236 13:59:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.236 13:59:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:22.236 13:59:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.146 13:59:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:24.146 00:29:24.146 real 0m27.833s 00:29:24.146 user 1m3.253s 00:29:24.146 sys 0m7.059s 00:29:24.146 13:59:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:24.146 13:59:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.146 ************************************ 00:29:24.146 END TEST nvmf_bdevperf 00:29:24.146 ************************************ 00:29:24.407 13:59:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:24.407 13:59:50 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:24.407 13:59:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:24.407 13:59:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.407 13:59:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:24.407 ************************************ 00:29:24.407 START TEST nvmf_target_disconnect 00:29:24.407 ************************************ 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:24.407 * Looking for test storage... 
00:29:24.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:24.407 13:59:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.548 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
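The array setup above is how nvmf/common.sh buckets candidate NICs out of its pci_bus_cache: E810 parts by Intel device IDs 0x1592/0x159b, X722 by 0x37d2, and the listed Mellanox IDs for mlx5 parts. A rough manual equivalent for spotting the two E810 functions reported just below (assuming lspci is available on the box, which the script itself does not rely on):

  # list Intel E810 functions by vendor:device ID, e.g. the 0000:4b:00.0/.1 ports found below
  lspci -Dnn | grep -Ei '8086:(1592|159b)'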
00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:32.549 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:32.549 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.549 13:59:57 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:32.549 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:32.549 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.549 13:59:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:32.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:29:32.549 00:29:32.549 --- 10.0.0.2 ping statistics --- 00:29:32.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.549 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:32.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:29:32.549 00:29:32.549 --- 10.0.0.1 ping statistics --- 00:29:32.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.549 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:32.549 ************************************ 00:29:32.549 START TEST nvmf_target_disconnect_tc1 00:29:32.549 ************************************ 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:29:32.549 
13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:32.549 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:32.549 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.549 [2024-07-15 13:59:58.203183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.549 [2024-07-15 13:59:58.203251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d66e20 with addr=10.0.0.2, port=4420 00:29:32.550 [2024-07-15 13:59:58.203279] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:32.550 [2024-07-15 13:59:58.203291] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:32.550 [2024-07-15 13:59:58.203298] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:32.550 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:32.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:32.550 Initializing NVMe Controllers 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:32.550 00:29:32.550 real 0m0.109s 00:29:32.550 user 0m0.055s 00:29:32.550 sys 
0m0.053s 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:32.550 ************************************ 00:29:32.550 END TEST nvmf_target_disconnect_tc1 00:29:32.550 ************************************ 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:32.550 ************************************ 00:29:32.550 START TEST nvmf_target_disconnect_tc2 00:29:32.550 ************************************ 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1277896 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1277896 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1277896 ']' 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
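For tc2, disconnect_init brings up a fresh nvmf_tgt inside the server-side namespace: -i 0 picks the shared-memory id, -e 0xFFFF enables every tracepoint group (hence the "Tracepoint Group Mask 0xFFFF specified" notice below), and -m 0xF0 restricts the app to cores 4-7, which is why the reactors below come up on cores 4, 5, 6 and 7. A minimal stand-alone sketch of the launch-and-wait step, with paths assumed relative to an SPDK checkout and the namespace created earlier:

  # start the target in the server-side namespace, then poll its RPC socket until it answers
  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -m 0xF0 &
  until sudo ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done   # rough stand-in for waitforlisten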
00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:32.550 13:59:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.550 [2024-07-15 13:59:58.355413] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:29:32.550 [2024-07-15 13:59:58.355503] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.550 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.550 [2024-07-15 13:59:58.446911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:32.550 [2024-07-15 13:59:58.540875] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:32.550 [2024-07-15 13:59:58.540932] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.550 [2024-07-15 13:59:58.540940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.550 [2024-07-15 13:59:58.540947] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.550 [2024-07-15 13:59:58.540954] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:32.550 [2024-07-15 13:59:58.541181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:32.550 [2024-07-15 13:59:58.541301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:32.550 [2024-07-15 13:59:58.541613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:32.550 [2024-07-15 13:59:58.541615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:32.812 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:32.812 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:32.812 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.813 Malloc0 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:32.813 13:59:59 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.813 [2024-07-15 13:59:59.228255] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.813 [2024-07-15 13:59:59.268630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1278174 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:32.813 13:59:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:33.073 EAL: No free 2048 kB 
hugepages reported on node 1 00:29:34.991 14:00:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1277896 00:29:34.991 14:00:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Write completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Write completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Write completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 Read completed with error (sct=0, sc=8) 00:29:34.991 starting I/O failed 00:29:34.991 [2024-07-15 14:00:01.301511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.991 [2024-07-15 14:00:01.301991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.302009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to 
recover it. 00:29:34.991 [2024-07-15 14:00:01.302564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.302601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.302831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.302845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.303359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.303396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.303822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.303834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.304311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.304348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.304790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.304803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.305099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.305110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.305593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.305630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.306046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.306059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.306617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.306654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 
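The burst of failed connects here is the point of the test: target_disconnect.sh@45 has just issued kill -9 against the first target (pid 1277896) while the reconnect app is mid-run, so the outstanding I/Os complete with errors (the sct=0, sc=8 completions above) and every reconnect attempt to 10.0.0.2:4420 gets ECONNREFUSED (errno 111) until the test brings a target back. For reference, the subsystem those qpairs were using was assembled by the rpc_cmd sequence a little earlier; consolidated into plain rpc.py calls (rpc.py path assumed, issued against the target running in cvl_0_0_ns_spdk) it amounts to roughly:

  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420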
00:29:34.991 [2024-07-15 14:00:01.307052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.307065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.307503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.307540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.307954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.307966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.308457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.308493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.308916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.308929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.309331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.309367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.309639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.309652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.310049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.310059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.310469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.310480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 00:29:34.991 [2024-07-15 14:00:01.310820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.310830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.991 qpair failed and we were unable to recover it. 
00:29:34.991 [2024-07-15 14:00:01.311216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.991 [2024-07-15 14:00:01.311227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.311609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.311619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.312038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.312049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.312470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.312480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.312809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.312819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.313207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.313217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.313604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.313613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.314035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.314044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.314319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.314329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.314717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.314726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 
00:29:34.992 [2024-07-15 14:00:01.315103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.315112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.315381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.315391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.315844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.315853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.316071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.316083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.316299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.316309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.316808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.316821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.317146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.317156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.317511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.317521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.317890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.317900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.318212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.318222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 
00:29:34.992 [2024-07-15 14:00:01.318609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.318619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.318837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.318847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.319203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.319214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.319413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.319422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.319803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.319812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.320136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.320146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.320502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.320511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.320879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.320889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.321270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.321281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.321732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.321742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 
00:29:34.992 [2024-07-15 14:00:01.322078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.322087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.322480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.322489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.322814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.322824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.323157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.323166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.323570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.323579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.324003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.992 [2024-07-15 14:00:01.324013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.992 qpair failed and we were unable to recover it. 00:29:34.992 [2024-07-15 14:00:01.324391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.324401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.324779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.324789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.325205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.325214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.325596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.325606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 
00:29:34.993 [2024-07-15 14:00:01.325975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.325985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.326311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.326321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.326704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.326714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.327132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.327142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.327477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.327487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.327841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.327851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.328225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.328236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.328548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.328557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.328919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.328929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.329254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.329263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 
00:29:34.993 [2024-07-15 14:00:01.329660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.329669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.330086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.330095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.330530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.330540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.330903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.330914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.331393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.331405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.331819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.331833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.332226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.332239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.332614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.332626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.332901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.332913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.333319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.333332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 
00:29:34.993 [2024-07-15 14:00:01.333687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.333699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.334025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.334037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.334403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.334415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.334787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.334798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.335212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.335224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.335435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.335448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.335839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.335851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.336146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.336158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.336529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.336540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.336998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.337009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 
00:29:34.993 [2024-07-15 14:00:01.337385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.337398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.337760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.337772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.338095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.338106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.338379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.338392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.338765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.338777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.339189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.993 [2024-07-15 14:00:01.339201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.993 qpair failed and we were unable to recover it. 00:29:34.993 [2024-07-15 14:00:01.339586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.339597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.339974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.339985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.340372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.340384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.340724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.340735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 
00:29:34.994 [2024-07-15 14:00:01.341120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.341137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.341520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.341532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.341810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.341823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.342223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.342240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.342615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.342632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.342988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.343004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.343392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.343408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.343816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.343832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.344280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.344297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.344616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.344632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 
00:29:34.994 [2024-07-15 14:00:01.345007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.345023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.345400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.345417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.345828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.345843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.346213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.346229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.346709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.346724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.347102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.347118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.347584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.347600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.348004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.348020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.348436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.348453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.348694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.348713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 
00:29:34.994 [2024-07-15 14:00:01.349058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.349074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.349473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.349490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.349853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.349869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.350282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.350298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.350717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.350733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.351148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.351164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.351465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.351480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.351851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.351867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.352250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.352267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.352584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.352600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 
00:29:34.994 [2024-07-15 14:00:01.352987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.353003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.353297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.994 [2024-07-15 14:00:01.353313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.994 qpair failed and we were unable to recover it. 00:29:34.994 [2024-07-15 14:00:01.353714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.353730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.354106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.354126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.354520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.354536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.354861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.354876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.355208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.355225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.355607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.355623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.356060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.356076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.356470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.356491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 
00:29:34.995 [2024-07-15 14:00:01.356902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.356922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.357352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.357373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.357763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.357787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.358195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.358216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.358519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.358538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.358977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.358997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.359408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.359429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.359752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.359771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.360207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.360228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.360642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.360662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 
00:29:34.995 [2024-07-15 14:00:01.361088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.361108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.361517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.361538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.361950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.361970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.362260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.362288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.362702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.362723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.363175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.363196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.363585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.995 [2024-07-15 14:00:01.363606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.995 qpair failed and we were unable to recover it. 00:29:34.995 [2024-07-15 14:00:01.364036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.364056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.364521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.364542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.364923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.364943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 
00:29:34.996 [2024-07-15 14:00:01.365347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.365367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.365780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.365800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.366237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.366258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.366575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.366595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.367012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.367040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.367452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.367480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.367881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.367908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.368333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.368362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.368780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.368808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.369239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.369267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 
00:29:34.996 [2024-07-15 14:00:01.369690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.369718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.370138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.370166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.370614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.370641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.371049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.371076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.371511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.371540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.371882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.371910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.372362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.372391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.372799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.372826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.373237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.373266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.373683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.373711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 
00:29:34.996 [2024-07-15 14:00:01.374140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.374170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.374560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.996 [2024-07-15 14:00:01.374587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.996 qpair failed and we were unable to recover it. 00:29:34.996 [2024-07-15 14:00:01.375021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.375054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.375470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.375499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.375944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.375971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.376296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.376324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.376713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.376742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.377250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.377278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.377693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.377720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.378109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.378145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 
00:29:34.997 [2024-07-15 14:00:01.378576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.378603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.379015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.379043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.379458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.379486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.379953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.379981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.380419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.380448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.380759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.380791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.381206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.381234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.381663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.381690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.382121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.382161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.382586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.382614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 
00:29:34.997 [2024-07-15 14:00:01.383030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.383058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.383486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.383515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.383937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.383964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.384466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.384495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.384914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.384941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.385379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.385407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.385776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.385803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.386121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.386157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.386611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.997 [2024-07-15 14:00:01.386639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.997 qpair failed and we were unable to recover it. 00:29:34.997 [2024-07-15 14:00:01.386969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.386997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 
00:29:34.998 [2024-07-15 14:00:01.387392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.387421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.387817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.387845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.388244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.388273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.388708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.388735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.389056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.389083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.389507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.389536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.389962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.389989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.390448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.390476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.390884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.390911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.391345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.391373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 
00:29:34.998 [2024-07-15 14:00:01.391798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.391826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.392261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.392290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.392694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.392726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.393061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.393089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.393533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.393562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.393977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.394003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.394466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.394495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.394904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.394931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.395332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.395360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.395770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.395797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 
00:29:34.998 [2024-07-15 14:00:01.396213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.396242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.396670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.396697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.397140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.397169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.397630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.397657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.398084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.398111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.398571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.998 [2024-07-15 14:00:01.398599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.998 qpair failed and we were unable to recover it. 00:29:34.998 [2024-07-15 14:00:01.399012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.399039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.399456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.399485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.399832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.399860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.400276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.400304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 
00:29:34.999 [2024-07-15 14:00:01.400727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.400754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.401185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.401214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.401540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.401567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.402024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.402052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.402478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.402507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.402940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.402968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.403392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.403420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.403843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.403870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.404296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.404324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.404762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.404790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 
00:29:34.999 [2024-07-15 14:00:01.405204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.405232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.405656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.405684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.406115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.406151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.406489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.406517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.406964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.406991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.407404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.407433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.407845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.407872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.408310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.408338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.408778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.408805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.409132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.409164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 
00:29:34.999 [2024-07-15 14:00:01.409619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.409646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.410141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.999 [2024-07-15 14:00:01.410170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:34.999 qpair failed and we were unable to recover it. 00:29:34.999 [2024-07-15 14:00:01.410612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.410646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.410958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.410985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.411368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.411397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.411831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.411859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.412275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.412304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.412710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.412737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.413054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.413081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.413522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.413550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 
00:29:35.000 [2024-07-15 14:00:01.413970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.413998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.414427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.414455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.414871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.414899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.415332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.415363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.415680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.415706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.416136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.416165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.416595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.416623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.417045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.417072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.417459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.417487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.417911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.417938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 
00:29:35.000 [2024-07-15 14:00:01.418354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.418382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.418796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.418823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.419159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.419188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.419606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.419633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.420046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.420073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.420477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.420506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.420944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.420972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.421410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.000 [2024-07-15 14:00:01.421438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.000 qpair failed and we were unable to recover it. 00:29:35.000 [2024-07-15 14:00:01.421864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.421891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.422327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.422355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 
00:29:35.001 [2024-07-15 14:00:01.422765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.422792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.423205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.423234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.423584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.423611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.424059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.424086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.424396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.424425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.424854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.424881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.425313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.425341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.425767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.425795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.426172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.426200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.426705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.426733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 
00:29:35.001 [2024-07-15 14:00:01.427160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.427188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.427624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.427651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.428095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.428146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.428479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.428516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.428952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.428980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.429397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.429425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.429856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.429883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.430325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.430353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.430766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.430795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.431219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.431247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 
00:29:35.001 [2024-07-15 14:00:01.431674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.431701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.432013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.432043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.432391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.432418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.432843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.432870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.433288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.001 [2024-07-15 14:00:01.433316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.001 qpair failed and we were unable to recover it. 00:29:35.001 [2024-07-15 14:00:01.433747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.433774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.434250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.434279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.434727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.434754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.435178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.435206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.435514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.435541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 
00:29:35.002 [2024-07-15 14:00:01.435968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.435995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.436416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.436445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.436889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.436916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.437356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.437385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.437789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.437817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.438224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.438253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.438690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.438717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.439145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.439173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.439609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.439636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.440092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.440119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 
00:29:35.002 [2024-07-15 14:00:01.440585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.440612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.441022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.441049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.441483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.441511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.441934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.441961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.442415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.442443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.442891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.442918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.443229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.443260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.443741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.443768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.002 qpair failed and we were unable to recover it. 00:29:35.002 [2024-07-15 14:00:01.444348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.002 [2024-07-15 14:00:01.444436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.444940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.444975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 
00:29:35.003 [2024-07-15 14:00:01.445411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.445442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.445840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.445869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.446292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.446333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.446761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.446788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.447220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.447250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.447691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.447719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.448155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.448184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.448629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.448656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.449099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.449137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.449567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.449595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 
00:29:35.003 [2024-07-15 14:00:01.449998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.450025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.450448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.450477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.450892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.450919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.451252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.451280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.451586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.451613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.452044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.452072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.452493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.452522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.452946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.452973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.453392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.453421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.453808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.453835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 
00:29:35.003 [2024-07-15 14:00:01.454266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.454295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.454719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.454746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.455184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.455212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.455641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.455668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.456085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.456112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.456576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.003 [2024-07-15 14:00:01.456603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.003 qpair failed and we were unable to recover it. 00:29:35.003 [2024-07-15 14:00:01.457032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.457059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.457506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.457535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.457967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.457995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.458320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.458358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 
00:29:35.004 [2024-07-15 14:00:01.458791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.458818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.459221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.459250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.459570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.459596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.459949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.459976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.460411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.460439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.460870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.460897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.461343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.461372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.461790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.461816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.462142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.462171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.462587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.462614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 
00:29:35.004 [2024-07-15 14:00:01.463073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.463100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.463422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.463455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.463884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.463919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.464354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.464383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.464712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.464742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.465175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.465204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.465652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.465679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.466106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.466143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.466551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.466578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.466992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.467020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 
00:29:35.004 [2024-07-15 14:00:01.467433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.467462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.467771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.467802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.468255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.468284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.468598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.468626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.469069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.469096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.469543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.469571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.470005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.470033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.470482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.470510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.470957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.470985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 00:29:35.004 [2024-07-15 14:00:01.471396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.004 [2024-07-15 14:00:01.471424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.004 qpair failed and we were unable to recover it. 
00:29:35.005 [2024-07-15 14:00:01.471841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.471868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.472285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.472313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.472655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.472683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.473092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.473119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.473566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.473593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.474004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.474030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.474484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.474513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.474732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.474762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.475200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.475228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.475675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.475704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 
00:29:35.005 [2024-07-15 14:00:01.476118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.476155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.476568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.476595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.477023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.477050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.477424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.477452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.477872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.477899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.478340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.478368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.478687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.478718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.479141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.479169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.479619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.479648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.480055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.480083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 
00:29:35.005 [2024-07-15 14:00:01.480511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.480539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.480950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.480977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.481391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.481426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.481845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.481872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.482270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.482299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.482738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.482765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.005 qpair failed and we were unable to recover it. 00:29:35.005 [2024-07-15 14:00:01.483197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.005 [2024-07-15 14:00:01.483225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.483659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.483687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.484099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.484145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.484579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.484606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 
00:29:35.006 [2024-07-15 14:00:01.485016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.485043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.485365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.485397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.485813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.485840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.486261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.486290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.486736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.486763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.487193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.487240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.487716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.487744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.488059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.488090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.488537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.488566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.488904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.488939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 
00:29:35.006 [2024-07-15 14:00:01.489365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.489393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.489824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.489854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.490259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.490288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.490607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.490638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.490963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.490990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.491399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.491427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.491851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.491878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.492343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.492372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.492785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.492811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.493226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.493256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 
00:29:35.006 [2024-07-15 14:00:01.493697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.493724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.494169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.494197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.494616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.006 [2024-07-15 14:00:01.494643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.006 qpair failed and we were unable to recover it. 00:29:35.006 [2024-07-15 14:00:01.494964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.494991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.495428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.495457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.495886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.495913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.496344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.496372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.496775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.496802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.497222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.497251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.497676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.497703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 
00:29:35.007 [2024-07-15 14:00:01.498108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.498145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.498574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.498601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.499029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.499063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.499578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.499606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.499816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.499847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.500276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.500305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.500738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.500766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.501182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.501211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.501644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.501672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.502116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.502154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 
00:29:35.007 [2024-07-15 14:00:01.502622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.502649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.503069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.503097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.503467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.503495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.503869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.503896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.504344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.504372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.504797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.504824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.505239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.505268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.505675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.505703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.506867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.506910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 00:29:35.007 [2024-07-15 14:00:01.507430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.007 [2024-07-15 14:00:01.507519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.007 qpair failed and we were unable to recover it. 
00:29:35.008 [2024-07-15 14:00:01.508028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.008 [2024-07-15 14:00:01.508062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.008 qpair failed and we were unable to recover it. 00:29:35.008 [2024-07-15 14:00:01.508384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.008 [2024-07-15 14:00:01.508427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.008 qpair failed and we were unable to recover it. 00:29:35.008 [2024-07-15 14:00:01.508871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.008 [2024-07-15 14:00:01.508899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.008 qpair failed and we were unable to recover it. 00:29:35.008 [2024-07-15 14:00:01.509330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.008 [2024-07-15 14:00:01.509360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.008 qpair failed and we were unable to recover it. 00:29:35.008 [2024-07-15 14:00:01.509789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.008 [2024-07-15 14:00:01.509817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.008 qpair failed and we were unable to recover it. 00:29:35.008 [2024-07-15 14:00:01.510229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.008 [2024-07-15 14:00:01.510258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.008 qpair failed and we were unable to recover it. 00:29:35.008 [2024-07-15 14:00:01.510701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.008 [2024-07-15 14:00:01.510730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.008 qpair failed and we were unable to recover it. 00:29:35.008 [2024-07-15 14:00:01.511157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.008 [2024-07-15 14:00:01.511186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.008 qpair failed and we were unable to recover it. 00:29:35.273 [2024-07-15 14:00:01.511655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.273 [2024-07-15 14:00:01.511684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.273 qpair failed and we were unable to recover it. 00:29:35.273 [2024-07-15 14:00:01.512094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.512147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 
00:29:35.274 [2024-07-15 14:00:01.512604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.512634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.513059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.513087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.513530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.513560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.513898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.513925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.514300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.514331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.516510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.516564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.516949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.516985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.517332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.517362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.517794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.517822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.518242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.518270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 
00:29:35.274 [2024-07-15 14:00:01.518708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.518735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.519166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.519193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.519647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.519683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.519996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.520030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.520480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.520511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.520932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.520959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.521284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.521312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.521752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.521780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.522196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.522225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.522637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.522664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 
00:29:35.274 [2024-07-15 14:00:01.523087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.523114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.523591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.523619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.524051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.524079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.524432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.524460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.524902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.524929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.525379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.525407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.525845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.525873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.526266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.526295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.526713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.526740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.527153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.527181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 
00:29:35.274 [2024-07-15 14:00:01.527595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.527622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.528035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.528062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.528474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.528503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.528933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.528960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.529386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.529415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.529823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.529851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.530265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.530294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.530740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.530767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.531211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.531239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.531660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.531688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 
00:29:35.274 [2024-07-15 14:00:01.532104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.532140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.532556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.532583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.533022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.533050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.533478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.533506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.533931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.533958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.534384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.534413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.534836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.534862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.535392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.535483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.535890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.535925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.536242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.536274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 
00:29:35.274 [2024-07-15 14:00:01.536702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.536731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.537143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.537173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.537668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.537707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.538152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.538184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.538544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.538571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.539011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.539038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.539519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.539547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.539995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.540022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.540435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.540464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.540890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.540917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 
00:29:35.274 [2024-07-15 14:00:01.541336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.541364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.541719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.541746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.542176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.542204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.542658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.542686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.543109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.543167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.543620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.543648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.544083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.544111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.544561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.544592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.545030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.545058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.545475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.545504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 
00:29:35.274 [2024-07-15 14:00:01.545915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.545942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.546266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.546295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.546594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.546622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.274 qpair failed and we were unable to recover it. 00:29:35.274 [2024-07-15 14:00:01.547029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.274 [2024-07-15 14:00:01.547056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.547477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.547505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.547919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.547947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.548363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.548391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.548821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.548848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.549281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.549310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.549754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.549783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 
00:29:35.275 [2024-07-15 14:00:01.550218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.550247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.550677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.550705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.551179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.551207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.551627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.551654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.552090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.552117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.552492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.552521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.552841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.552869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.553312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.553341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.553826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.553855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.554298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.554326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 
00:29:35.275 [2024-07-15 14:00:01.554754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.554782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.555212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.555240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.555578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.555612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.556033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.556061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.556487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.556516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.556969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.556996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.557493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.557522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.557941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.557969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.558366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.558394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.558814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.558841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 
00:29:35.275 [2024-07-15 14:00:01.559258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.559286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.559726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.559754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.560173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.560201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.560626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.560652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.561058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.561084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.561543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.561573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.562011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.562039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.562491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.562520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.562939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.562967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.563384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.563412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 
00:29:35.275 [2024-07-15 14:00:01.563820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.563847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.564269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.564298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.564709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.564735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.565174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.565202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.565627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.565655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.566071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.566098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.566594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.566622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.567033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.567060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.567463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.567491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.567815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.567850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 
00:29:35.275 [2024-07-15 14:00:01.568262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.568293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.568682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.568710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.569136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.569166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.569616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.569643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.570063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.570090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.570414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.570442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.570866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.570893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.571218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.571245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.571560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.571587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.572016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.572043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 
00:29:35.275 [2024-07-15 14:00:01.572542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.572570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.573015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.573042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.573340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.573367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.573800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.573828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.574146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.574178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.574634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.574661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.575077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.575104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.575542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.575570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.575902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.575929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.576385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.576414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 
00:29:35.275 [2024-07-15 14:00:01.576853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.576881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.577380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.577408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.577838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.577865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.578309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.578339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.578782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.578809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.579259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.275 [2024-07-15 14:00:01.579287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-15 14:00:01.579737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.579765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.580185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.580213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.580542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.580569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.581064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.581091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 
00:29:35.276 [2024-07-15 14:00:01.581505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.581534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.581947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.581974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.582403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.582431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.582861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.582888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.583211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.583244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.583586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.583613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.584028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.584055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.584539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.584567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.584998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.585027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.585446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.585481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 
00:29:35.276 [2024-07-15 14:00:01.585917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.585945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.586460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.586488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.586896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.586923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.587453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.587481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.587897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.587924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.588459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.588553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.589042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.589078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.589520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.589553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.589932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.589962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.590471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.590502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 
00:29:35.276 [2024-07-15 14:00:01.590813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.590839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.591314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.591342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.591772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.591800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.592161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.592190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.592553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.592581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.592908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.592935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.593401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.593429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.593871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.593898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.594426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.594455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.594901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.594928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 
00:29:35.276 [2024-07-15 14:00:01.595352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.595380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.595813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.595841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.596321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.596350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.596775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.596802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.597272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.597301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.597741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.597768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.598204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.598235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.598675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.598704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.599149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.599180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.599616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.599645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 
00:29:35.276 [2024-07-15 14:00:01.600117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.600157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.600629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.600657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.601086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.601115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.601459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.601487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.601946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.601975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.602388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.602418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.602857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.602887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.603329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.603358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.603786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.603815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.604225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.604263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 
00:29:35.276 [2024-07-15 14:00:01.604666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.604696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.605117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.605159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.605687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.605716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.606181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.606226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.606589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.606624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.607073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.607103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.607530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.607560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.607998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.608028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.608458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.608488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-15 14:00:01.608918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.276 [2024-07-15 14:00:01.608947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.276 qpair failed and we were unable to recover it. 
00:29:35.276 [2024-07-15 14:00:01.609377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.609408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.609863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.609891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.610331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.610361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.610770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.610800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.611225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.611255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.611669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.611698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.612145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.612175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.612606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.612635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.613068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.613097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.613420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.613455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 
00:29:35.277 [2024-07-15 14:00:01.613892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.613922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.614254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.614285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.614650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.614679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.615116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.615156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.615627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.615656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.616074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.616103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.616579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.616610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.617027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.617058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.617331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.617362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-15 14:00:01.617793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.277 [2024-07-15 14:00:01.617823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.277 qpair failed and we were unable to recover it. 
00:29:35.277 [2024-07-15 14:00:01.618264 .. 14:00:01.708069] posix.c:1038:posix_sock_create / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: the same error pair (connect() failed, errno = 111; sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420) repeats for roughly 200 further reconnect attempts in this window, each ending with "qpair failed and we were unable to recover it."
00:29:35.280 [2024-07-15 14:00:01.708442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.708473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.708790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.708823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.709306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.709336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.709781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.709809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.710240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.710270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.710715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.710744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.711141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.711171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.711618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.711647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.712077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.712107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.712436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.712477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 
00:29:35.280 [2024-07-15 14:00:01.712921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.712951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.713402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.713434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.713870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.713900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.714443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.714541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.715092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.715146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.715592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.715624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.716062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.716093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.716547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.716578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.716907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.716943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.718357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.718424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 
00:29:35.280 [2024-07-15 14:00:01.718893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.718926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.719429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.719529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.720076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.720112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.720631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.720663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.721093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.721135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.721575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.721604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.721963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.721994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.722435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.722466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.722965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.722995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.723434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.723466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 
00:29:35.280 [2024-07-15 14:00:01.723917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.723946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.724399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.724430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.724795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.724825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.725262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.725294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.725738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.725768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.726180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.726212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.726666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.726697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.727080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.727110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.280 qpair failed and we were unable to recover it. 00:29:35.280 [2024-07-15 14:00:01.727553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.280 [2024-07-15 14:00:01.727583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.728025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.728055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 
00:29:35.281 [2024-07-15 14:00:01.728390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.728428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.728901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.728932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.729389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.729420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.729866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.729897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.730332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.730362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.730819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.730850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.731300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.731330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.731776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.731807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.732241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.732273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.732723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.732760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 
00:29:35.281 [2024-07-15 14:00:01.733204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.733236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.733693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.733724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.734104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.734147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.734638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.734670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.735131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.735162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.735656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.735686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.736143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.736175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.736538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.736576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.737019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.737049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.737568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.737670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 
00:29:35.281 [2024-07-15 14:00:01.738200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.738263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.738695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.738727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.739173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.739207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.739700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.739732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.740064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.740094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.740529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.740562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.741021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.741051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.741429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.741462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.741910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.741940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.742461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.742563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 
00:29:35.281 [2024-07-15 14:00:01.743069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.743105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.743566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.743599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.744014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.744046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.744372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.744409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.744897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.744929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.745379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.745411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.745849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.745881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.746334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.746366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.746808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.746839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.747289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.747320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 
00:29:35.281 [2024-07-15 14:00:01.747760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.747790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.748236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.748268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.748719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.748748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.749202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.749232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.749690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.749721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.750173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.750204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.750651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.750682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.751138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.751170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.751606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.751636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.752081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.752118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 
00:29:35.281 [2024-07-15 14:00:01.752579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.752610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.753054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.753084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.753562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.753595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.754051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.754082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.754538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.754570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.755042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.755074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 00:29:35.281 [2024-07-15 14:00:01.755394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.281 [2024-07-15 14:00:01.755426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:35.281 qpair failed and we were unable to recover it. 
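Every entry in the run above is the same two-line pattern: posix_sock_create's connect() call fails with errno 111 (ECONNREFUSED on Linux, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 at that moment), and nvme_tcp_qpair_connect_sock then gives up on that qpair. A minimal sketch, assuming only a Linux host where the target address is reachable but no NVMe/TCP listener is bound to port 4420 (plain BSD sockets, not SPDK code; address and port are copied from the log purely for illustration), reproduces the same errno:

/* Illustrative sketch only, not SPDK code: a bare TCP connect() to a
 * reachable host with no listener on the port fails with errno 111
 * (ECONNREFUSED), which is what posix_sock_create reports above.
 * If the host were unreachable instead, ETIMEDOUT/EHOSTUNREACH would
 * be seen rather than 111. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}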
00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Write completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Write completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Write completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Write completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Write completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Write completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Write completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Write completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Read completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.281 Write completed with error (sct=0, sc=8) 00:29:35.281 starting I/O failed 00:29:35.282 Write completed with error (sct=0, sc=8) 00:29:35.282 starting I/O failed 00:29:35.282 Read completed with error (sct=0, sc=8) 00:29:35.282 starting I/O failed 00:29:35.282 Read completed with error (sct=0, sc=8) 00:29:35.282 starting I/O failed 00:29:35.282 Write completed with error (sct=0, sc=8) 00:29:35.282 starting I/O failed 00:29:35.282 [2024-07-15 14:00:01.755776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.282 [2024-07-15 14:00:01.756283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.756305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 
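The burst of "completed with error (sct=0, sc=8) ... starting I/O failed" lines followed by "CQ transport error -6 (No such device or address) on qpair id 3" shows the queued reads and writes being completed with an error status once the qpair's transport went away, after which the connect retries continue against a new tqpair (0x78d220). When skimming a capture like this, a rough count of the two failure signatures is often enough to see how long the retry loop ran. A hypothetical triage helper (plain C, standard library only, not part of SPDK or of this test suite) is sketched below:

/* Hypothetical helper: count the two failure signatures seen above in a
 * saved console log.  Usage: ./count_failures build.log */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <logfile>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "r");
    if (!f) {
        perror("fopen");
        return 1;
    }

    char chunk[8192];
    unsigned long connect_refused = 0, io_failed = 0;

    /* Lines in this log can hold several entries and can exceed the buffer;
     * fgets() then returns them in chunks.  Counting every occurrence per
     * chunk is enough for a rough total (a signature split exactly across a
     * chunk boundary would be missed, which is acceptable here). */
    while (fgets(chunk, sizeof(chunk), f)) {
        for (char *p = chunk; (p = strstr(p, "connect() failed, errno = 111")) != NULL; p++)
            connect_refused++;
        for (char *p = chunk; (p = strstr(p, "starting I/O failed")) != NULL; p++)
            io_failed++;
    }
    fclose(f);

    printf("connect() refused: %lu\n", connect_refused);
    printf("I/O failures:      %lu\n", io_failed);
    return 0;
}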
00:29:35.282 [2024-07-15 14:00:01.756600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.756612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.757102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.757115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.757604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.757663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.758100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.758115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.758609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.758669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.759158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.759199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.759487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.759502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.759932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.759946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.760348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.760362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.760782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.760795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 
00:29:35.282 [2024-07-15 14:00:01.761351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.761410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.761848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.761865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.762282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.762295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.762648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.762662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.763091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.763103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.763511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.763525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.763929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.763942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.764431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.764490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.764938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.764954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.765462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.765522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 
00:29:35.282 [2024-07-15 14:00:01.765963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.765978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.766481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.766540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.767022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.767038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.767449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.767464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.767882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.767897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.768429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.768489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.768922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.768936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.769461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.769520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.769997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.770014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.770445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.770459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 
00:29:35.282 [2024-07-15 14:00:01.770872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.770886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.771408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.771467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.771912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.771927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.772462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.772521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.772845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.772862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.773275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.773289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.773693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.773706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.774115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.774136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.774461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.774473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.774902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.774915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 
00:29:35.282 [2024-07-15 14:00:01.775415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.775474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.775927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.775941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.776450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.776510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.776872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.776887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.777291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.777305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.777657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.777671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.778105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.778119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.778535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.778548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.778991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.779005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.779429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.779442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 
00:29:35.282 [2024-07-15 14:00:01.779693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.779712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.780155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.780171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.780580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.780601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.781006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.781019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.781449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.781462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.781881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.781894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.782303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.782317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.782763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.782776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.783240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.783255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.783673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.783686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 
00:29:35.282 [2024-07-15 14:00:01.784086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.784098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.784529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.784542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.784953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.784965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.785482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.785538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.785985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.786000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.786426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.786440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.786855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.786869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.787292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.787306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.787578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.787592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.788017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.788030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 
00:29:35.282 [2024-07-15 14:00:01.788333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.788345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.788766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.788778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.282 [2024-07-15 14:00:01.789218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.282 [2024-07-15 14:00:01.789231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.282 qpair failed and we were unable to recover it. 00:29:35.283 [2024-07-15 14:00:01.789649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.283 [2024-07-15 14:00:01.789662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.283 qpair failed and we were unable to recover it. 00:29:35.283 [2024-07-15 14:00:01.790068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.283 [2024-07-15 14:00:01.790082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.283 qpair failed and we were unable to recover it. 00:29:35.283 [2024-07-15 14:00:01.790508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.283 [2024-07-15 14:00:01.790520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.283 qpair failed and we were unable to recover it. 00:29:35.283 [2024-07-15 14:00:01.790905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.283 [2024-07-15 14:00:01.790918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.283 qpair failed and we were unable to recover it. 00:29:35.283 [2024-07-15 14:00:01.791325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.283 [2024-07-15 14:00:01.791339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.283 qpair failed and we were unable to recover it. 00:29:35.283 [2024-07-15 14:00:01.791749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.283 [2024-07-15 14:00:01.791761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.283 qpair failed and we were unable to recover it. 00:29:35.283 [2024-07-15 14:00:01.792029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.283 [2024-07-15 14:00:01.792042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.283 qpair failed and we were unable to recover it. 
00:29:35.283 [2024-07-15 14:00:01.792463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.283 [2024-07-15 14:00:01.792476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.283 qpair failed and we were unable to recover it. 00:29:35.283 [2024-07-15 14:00:01.792880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.283 [2024-07-15 14:00:01.792894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.283 qpair failed and we were unable to recover it. 00:29:35.283 [2024-07-15 14:00:01.793299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.283 [2024-07-15 14:00:01.793313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.283 qpair failed and we were unable to recover it. 00:29:35.283 [2024-07-15 14:00:01.793689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.283 [2024-07-15 14:00:01.793705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.283 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.794014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.794030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.794462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.794477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.794950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.794962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.795353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.795366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.795765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.795778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.796186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.796201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 
00:29:35.551 [2024-07-15 14:00:01.796632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.796646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.797066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.797080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.797830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.797863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.798274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.798294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.798711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.798724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.799161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.799176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.800211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.800240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.800659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.800675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.801093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.801107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.801542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.801558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 
00:29:35.551 [2024-07-15 14:00:01.801961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.801975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.802382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.802397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.802661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.802677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.803101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.803116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.803432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.803446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.803856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.803869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.804280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.804294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.804728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.804741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.805143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.805156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.805579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.805592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 
00:29:35.551 [2024-07-15 14:00:01.806014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.806026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.806321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.806334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.806743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.806756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.807164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.807178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.807412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.807428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.807846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.807859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.808258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.808272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.808695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.808708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.809136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.809149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.809536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.809549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 
00:29:35.551 [2024-07-15 14:00:01.809994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.810011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.810432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.810446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.810846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.810859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.811278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.811292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.811689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.811701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.812135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.812149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.812581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.812593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.813013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.813026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.813464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.813476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.813738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.813752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 
00:29:35.551 [2024-07-15 14:00:01.814173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.814186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.814627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.814640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.815086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.815098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.815507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.815521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.815940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.815953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.816390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.816404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.816806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.816819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.817225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.817238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.817534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.817546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.817959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.817971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 
00:29:35.551 [2024-07-15 14:00:01.818370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.818383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.818783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.818795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.819211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.819224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.819637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.819651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.820055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.820067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.820470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.820484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.820899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.820911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.821285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.821299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.821708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.821720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.822133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.822148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 
00:29:35.551 [2024-07-15 14:00:01.822617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.822630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.823050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.823064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.823547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.823600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.824014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.824030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.824443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.824457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.824878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.824891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.825401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.551 [2024-07-15 14:00:01.825455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.551 qpair failed and we were unable to recover it. 00:29:35.551 [2024-07-15 14:00:01.825911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.825927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.826439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.826492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.826922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.826938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 
00:29:35.552 [2024-07-15 14:00:01.827430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.827482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.827744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.827767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.828181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.828195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.828510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.828522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.828921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.828934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.829347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.829362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.829775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.829787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.830210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.830224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.830621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.830635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.831021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.831034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 
00:29:35.552 [2024-07-15 14:00:01.831452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.831464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.831875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.831887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.832317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.832329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.832752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.832765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.833170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.833184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.833578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.833591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.833849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.833862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.834267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.834280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.834701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.834714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.835138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.835152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 
00:29:35.552 [2024-07-15 14:00:01.835577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.835589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.836009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.836023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.836435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.836448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.836778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.836792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.837228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.837242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.837642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.837655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.838053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.838067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.838369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.838382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.838781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.838801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.839206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.839219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 
00:29:35.552 [2024-07-15 14:00:01.839603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.839615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.840082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.840095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.840487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.840501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.840903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.840915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.841369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.841382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.841803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.841815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.842211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.842223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.842640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.842653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.843055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.843069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.843466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.843479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 
00:29:35.552 [2024-07-15 14:00:01.843910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.843924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.844325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.844338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.844738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.844751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.845139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.845153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.845549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.845562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.845965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.845978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.846379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.846392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.846808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.846821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.847218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.847230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.847632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.847645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 
00:29:35.552 [2024-07-15 14:00:01.848047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.848060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.848372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.848387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.848787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.848800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.849206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.849220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.849529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.849542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.849928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.849942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.850335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.850348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.850750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.850762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.851161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.851173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.851596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.851610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 
00:29:35.552 [2024-07-15 14:00:01.852052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.852064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.852486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.852498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.852900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.852914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.853330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.853343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.853738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.853750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.854155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.854168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.854570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.854583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.854957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.854970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.855369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.855381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.855782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.855796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 
00:29:35.552 [2024-07-15 14:00:01.856034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.856050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.856458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.856470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.856866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.856880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.857280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.857292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.857584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.552 [2024-07-15 14:00:01.857596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.552 qpair failed and we were unable to recover it. 00:29:35.552 [2024-07-15 14:00:01.857991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.858003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.858388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.858401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.858801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.858813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.859230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.859243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.859627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.859640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 
00:29:35.553 [2024-07-15 14:00:01.860029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.860041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.860436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.860448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.860849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.860861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.861283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.861297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.861621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.861633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.861855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.861868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.862292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.862304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.862723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.862736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.863060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.863073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.863474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.863486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 
00:29:35.553 [2024-07-15 14:00:01.863826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.863840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.864256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.864268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.864666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.864678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.865010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.865021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.865330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.865342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.865727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.865741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.866136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.866151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.866550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.866562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.866960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.866972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.867389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.867401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 
00:29:35.553 [2024-07-15 14:00:01.867794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.867808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.868065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.868078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.868491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.868504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.868919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.868931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.869329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.869343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.869789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.869802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.870194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.870206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.870622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.870635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.871029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.871042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.871513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.871525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 
00:29:35.553 [2024-07-15 14:00:01.871907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.871921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.872342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.872354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.872748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.872760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.873157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.873170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.873583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.873595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.874015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.874028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.874442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.874454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.874851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.874863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.875176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.875188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.875580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.875592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 
00:29:35.553 [2024-07-15 14:00:01.876024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.876036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.876436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.876448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.876849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.876861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.877281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.877295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.877690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.877704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.878099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.878112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.553 [2024-07-15 14:00:01.878532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.553 [2024-07-15 14:00:01.878544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.553 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.878964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.878977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.879461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.879509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.879987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.880001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 
00:29:35.554 [2024-07-15 14:00:01.880393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.880406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.880816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.880828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.881227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.881240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.881651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.881664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.882080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.882092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.882512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.882524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.882921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.882933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.883415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.883468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.883893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.883907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.884376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.884423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 
00:29:35.554 [2024-07-15 14:00:01.884823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.884839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.885239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.885253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.885655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.885668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.886083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.886097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.886417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.886429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.886843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.886856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.887251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.887264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.887632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.887645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.888039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.888051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.888288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.888303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 
00:29:35.554 [2024-07-15 14:00:01.888652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.888665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.889086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.889098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.889533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.889545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.889940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.889951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.890341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.890355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.890770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.890782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.891099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.891110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.891558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.891571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.891959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.891971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.892463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.892510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 
00:29:35.554 [2024-07-15 14:00:01.892903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.892917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.893357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.893404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.893811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.893826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.894120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.894142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.894610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.894622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.895042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.895054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.895540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.895589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.896007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.896022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.896433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.896447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.896845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.896857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 
00:29:35.554 [2024-07-15 14:00:01.897347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.897394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.897628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.897644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.898054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.898067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.898465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.898478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.898781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.898792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.899168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.899180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.899524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.899538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.899969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.899982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.554 qpair failed and we were unable to recover it. 00:29:35.554 [2024-07-15 14:00:01.900377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.554 [2024-07-15 14:00:01.900391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.900675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.900686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 
00:29:35.555 [2024-07-15 14:00:01.901108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.901120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.901545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.901557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.901952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.901964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.902469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.902516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.902918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.902932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.903442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.903489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.903895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.903910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.904741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.904770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.905204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.905218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.905680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.905692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 
00:29:35.555 [2024-07-15 14:00:01.905953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.905966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.906377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.906389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.906802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.906816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.907348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.907366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.907766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.907779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.908149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.908161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.908566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.908578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.908977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.908989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.909309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.909322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.909719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.909731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 
00:29:35.555 [2024-07-15 14:00:01.910111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.910133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.910513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.910524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.910919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.910930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.911413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.911457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.911842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.911856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.912245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.912263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.912670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.912681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.913075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.913087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.913316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.913330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.913656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.913668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 
00:29:35.555 [2024-07-15 14:00:01.914096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.914108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.914511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.914523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.914989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.915001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.915299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.915311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.915607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.915617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.915942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.915955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.916379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.916391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.916785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.916796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.917204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.917216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.917618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.917629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 
00:29:35.555 [2024-07-15 14:00:01.917943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.917956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.918236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.918248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.918642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.918653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.919068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.919079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.919478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.919490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.919886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.919898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.920342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.920356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.920735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.920746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.921137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.921149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 00:29:35.555 [2024-07-15 14:00:01.921350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.555 [2024-07-15 14:00:01.921363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.555 qpair failed and we were unable to recover it. 
00:29:35.555 [2024-07-15 14:00:01.921747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.921758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.922193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.922205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.922508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.922519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.922813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.922824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.923115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.923132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.923554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.923565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.923898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.923910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.924269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.924281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.924674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.924685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.925102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.925113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 
00:29:35.556 [2024-07-15 14:00:01.925504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.925515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.925911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.925922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.926250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.926261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.926659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.926671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.927061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.927071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.927457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.927469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.927765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.927778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.928149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.928162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.928577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.928589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.928986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.928998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 
00:29:35.556 [2024-07-15 14:00:01.929390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.929402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.929657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.929667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.930050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.930060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.930450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.930461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.930856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.930867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.931272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.931283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.931659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.931670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.931946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.931956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.932353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.932365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.932785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.932798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 
00:29:35.556 [2024-07-15 14:00:01.933189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.933201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.933597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.933608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.933995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.934006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.934330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.934342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.934753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.934764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.935157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.935169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.935478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.935490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.935870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.935881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.936268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.936280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.936664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.936675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 
00:29:35.556 [2024-07-15 14:00:01.937076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.937086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.937444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.937456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.937848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.937860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.938266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.938280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.556 [2024-07-15 14:00:01.938667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.556 [2024-07-15 14:00:01.938678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.556 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.939092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.939104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.939481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.939493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.939888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.939899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.940316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.940328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.940738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.940749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 
00:29:35.557 [2024-07-15 14:00:01.941137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.941149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.941551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.941562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.941955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.941966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.942363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.942375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.942676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.942687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.943084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.943096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.943430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.943442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.943855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.943866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.944256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.944268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.944659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.944670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 
00:29:35.557 [2024-07-15 14:00:01.945064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.945075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.945475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.945486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.945876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.945887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.946273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.946285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.946675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.946686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.947098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.947110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.947523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.947535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.947929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.947940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.948417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.948461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.948879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.948892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 
00:29:35.557 [2024-07-15 14:00:01.949403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.949446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.949854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.949868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.950360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.950403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.950736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.950749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.951141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.951154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.951562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.951573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.952015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.952025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.952426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.952438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.952829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.952840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.953220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.953232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 
00:29:35.557 [2024-07-15 14:00:01.953633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.953644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.954091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.954102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.954493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.954506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.954902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.954914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.955360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.955377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.955790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.955801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.956189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.956201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.956666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.956677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.957093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.957104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.957364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.957374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 
00:29:35.557 [2024-07-15 14:00:01.957707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.957717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.958114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.958128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.958515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.958526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.958938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.958949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.959411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.959426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.959812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.959824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.960314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.960358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.960781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.960795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.961207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.961220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.961637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.961648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 
00:29:35.557 [2024-07-15 14:00:01.962041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.962052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.962518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.557 [2024-07-15 14:00:01.962529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.557 qpair failed and we were unable to recover it. 00:29:35.557 [2024-07-15 14:00:01.962722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.962733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.963148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.963159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.963543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.963555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.963929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.963940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.964249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.964260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.964660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.964671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.965072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.965083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.965473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.965485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 
00:29:35.558 [2024-07-15 14:00:01.965874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.965886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.966279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.966293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.966687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.966698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.967117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.967136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.967531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.967542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.967937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.967948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.968441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.968484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.968896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.968910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.969405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.969450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.969834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.969849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 
00:29:35.558 [2024-07-15 14:00:01.970250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.970263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.970480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.970495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.970783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.970795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.971193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.971205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.971598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.971610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.972025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.972037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.972466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.972478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.972864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.972875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.973268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.973280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.973680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.973692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 
00:29:35.558 [2024-07-15 14:00:01.974082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.974094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.974493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.974505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.974902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.974914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.975354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.975365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.975759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.975769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.976164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.976175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.976436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.976448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.976868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.976879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.977254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.977265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.977616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.977628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 
00:29:35.558 [2024-07-15 14:00:01.977913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.977925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.978396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.978408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.978788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.978799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.979183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.979195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.979489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.979500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.979882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.979893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.980274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.980286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.980659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.980670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.981062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.981074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.981497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.981508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 
00:29:35.558 [2024-07-15 14:00:01.981969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.981980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.982353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.982365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.982825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.982838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.983365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.983408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.983807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.983820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.984107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.984119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.984582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.984594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.984974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.984985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.985465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.985508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.985914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.985928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 
00:29:35.558 [2024-07-15 14:00:01.986463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.986504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.986850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.986864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.987346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.987389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.987678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.987691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.988100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.988112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.988536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.988550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.988954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.988967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.989450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.989493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.989977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.989992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.990388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.990430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 
00:29:35.558 [2024-07-15 14:00:01.990823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.990838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.991307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.991349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.991761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.991775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.992189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.992202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.992594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.992606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.993004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.993015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.993378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.993391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.993796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.993807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.994199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.994211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.558 [2024-07-15 14:00:01.994608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.994623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 
00:29:35.558 [2024-07-15 14:00:01.995038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.558 [2024-07-15 14:00:01.995049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.558 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:01.995436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:01.995448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:01.995833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:01.995845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:01.996233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:01.996245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:01.996636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:01.996647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:01.997066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:01.997076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:01.997464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:01.997475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:01.997872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:01.997883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:01.998278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:01.998290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:01.998704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:01.998714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 
00:29:35.559 [2024-07-15 14:00:01.999101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:01.999112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:01.999498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:01.999510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:01.999901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:01.999911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.000405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.000448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.000844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.000858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.001256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.001268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.001681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.001692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.002093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.002105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.002398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.002413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.002803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.002815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 
00:29:35.559 [2024-07-15 14:00:02.003209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.003221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.003636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.003647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.004034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.004045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.004500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.004512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.004909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.004920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.005331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.005343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.005732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.005745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.005968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.005984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.006377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.006390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.006839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.006851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 
00:29:35.559 [2024-07-15 14:00:02.007386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.007437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.007723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.007736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.008143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.008155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.008590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.008601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.008990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.009001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.009394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.009406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.009768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.009779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.010179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.010191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.010613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.010624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.011020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.011032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 
00:29:35.559 [2024-07-15 14:00:02.011426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.011443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.011784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.011796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.012172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.012183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.012594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.012605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.013009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.013020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.013387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.013399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.013790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.013801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.014199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.014211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.014598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.014609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.014975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.014986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 
00:29:35.559 [2024-07-15 14:00:02.015297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.015308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.015697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.015708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.016173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.016185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.016581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.016593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.016974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.016986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.017392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.017404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.017798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.017810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.018206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.018219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.018618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.018629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.019026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.019037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 
00:29:35.559 [2024-07-15 14:00:02.019487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.019500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.019879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.019889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.020364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.020375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.020753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.020764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.021061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.021071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.021535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.021546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.021935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.021946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.022439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.022481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.022854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.022868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.023427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.023469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 
00:29:35.559 [2024-07-15 14:00:02.023884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.023898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.024247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.024259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.024571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.559 [2024-07-15 14:00:02.024582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.559 qpair failed and we were unable to recover it. 00:29:35.559 [2024-07-15 14:00:02.024964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.024975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.025329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.025341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.025775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.025786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.026181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.026192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.026577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.026588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.026980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.026991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.027380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.027392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 
00:29:35.560 [2024-07-15 14:00:02.027783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.027794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.028172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.028185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.028576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.028588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.029002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.029014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.029426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.029438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.029848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.029860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.030253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.030264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.030674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.030684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.031084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.031094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.031571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.031583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 
00:29:35.560 [2024-07-15 14:00:02.031961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.031972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.032460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.032502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.032793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.032807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.033185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.033197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.033589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.033601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.034019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.034031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.034336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.034347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.034719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.034730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.035118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.035133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.035532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.035543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 
00:29:35.560 [2024-07-15 14:00:02.035928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.035940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.036344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.036356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.036749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.036760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.037160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.037172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.037585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.037596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.037857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.037868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.038251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.038262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.038681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.038692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.039080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.039093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.039402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.039414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 
00:29:35.560 [2024-07-15 14:00:02.039721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.039732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.040140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.040151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.040566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.040578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.040997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.041009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.041304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.041316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.041747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.041759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.042154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.042166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.042573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.042584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.042898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.042909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.043308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.043319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 
00:29:35.560 [2024-07-15 14:00:02.043709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.043721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.044135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.044147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.044424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.044435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.044821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.044832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.045228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.045239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.045625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.045636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.046030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.046042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.046509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.046521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.046910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.046922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 00:29:35.560 [2024-07-15 14:00:02.047238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.560 [2024-07-15 14:00:02.047250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.560 qpair failed and we were unable to recover it. 
00:29:35.560 [2024-07-15 14:00:02.047507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:35.560 [2024-07-15 14:00:02.047517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 
00:29:35.560 qpair failed and we were unable to recover it. 
00:29:35.560 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats for every subsequent connection attempt in this span (log timestamps 2024-07-15 14:00:02.047910 through 14:00:02.132400, console timestamps 00:29:35.560 through 00:29:35.837): each attempt on tqpair=0x78d220 with addr=10.0.0.2, port=4420 fails with connect() errno = 111, followed by "qpair failed and we were unable to recover it." ...]
00:29:35.837 [2024-07-15 14:00:02.132784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.132795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.133209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.133221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.133534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.133544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.133813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.133823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.134218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.134229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.134365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.134377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.134778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.134789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.135007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.135021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.135371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.135382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.135792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.135803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 
00:29:35.837 [2024-07-15 14:00:02.136196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.136207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.136600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.136610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.137038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.137048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.137346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.137359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.137744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.137755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.138143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.138153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.138543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.138554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.138810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.138821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.139204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.139215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.139606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.139618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 
00:29:35.837 [2024-07-15 14:00:02.140004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.140016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.140417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.140428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.140819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.140830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.141269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.837 [2024-07-15 14:00:02.141280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.837 qpair failed and we were unable to recover it. 00:29:35.837 [2024-07-15 14:00:02.141578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.141590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.141948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.141958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.142334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.142345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.142737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.142748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.143141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.143152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.143581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.143592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 
00:29:35.838 [2024-07-15 14:00:02.143990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.144001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.144407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.144418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.144809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.144820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.145238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.145249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.145641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.145651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.145904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.145915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.146303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.146314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.146726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.146737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.147120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.147135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.147509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.147520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 
00:29:35.838 [2024-07-15 14:00:02.147911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.147923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.148327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.148337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.148723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.148733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.149125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.149136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.149524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.149535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.149937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.149948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.150315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.150354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.150766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.150780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.151280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.151319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.151728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.151742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 
00:29:35.838 [2024-07-15 14:00:02.152213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.152225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.152514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.152524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.152925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.152935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.153354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.153365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.153744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.153755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.154051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.154063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.154450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.154461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.154870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.154881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.155265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.155276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.155673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.155684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 
00:29:35.838 [2024-07-15 14:00:02.156072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.156083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.156397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.156407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.156807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.156818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.157213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.157224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.157627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.157637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.158053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.158063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.158417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.158429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.158709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.158720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.159107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.159119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.159514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.159524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 
00:29:35.838 [2024-07-15 14:00:02.159910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.159921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.160312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.160323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.160703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.160714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.160976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.160986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.161364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.161375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.161675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.161687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.161960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.161970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.162377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.162388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.162782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.162792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.163177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.163188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 
00:29:35.838 [2024-07-15 14:00:02.163611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.163621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.164028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.164042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.164435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.164446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.164838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.164849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.165245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.165255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.165670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.165680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.166096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.166107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.166535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.166546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.166765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.166779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.167161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.167175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 
00:29:35.838 [2024-07-15 14:00:02.167484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.167493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.167891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.167902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.168283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.168294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.168707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.168717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.169097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.169108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.169494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.169506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.169893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.169903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.170309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.170320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.170708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.170718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.171102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.171113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 
00:29:35.838 [2024-07-15 14:00:02.171404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.171415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.171814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.171825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.172210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.838 [2024-07-15 14:00:02.172221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.838 qpair failed and we were unable to recover it. 00:29:35.838 [2024-07-15 14:00:02.172611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.172621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.172937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.172948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.173341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.173352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.173746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.173756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.174139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.174150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.174539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.174549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.174848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.174858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 
00:29:35.839 [2024-07-15 14:00:02.175253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.175264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.175657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.175668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.176056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.176066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.176403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.176414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.176799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.176809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.177205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.177216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.177614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.177625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.177920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.177932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.178371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.178382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.178633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.178645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 
00:29:35.839 [2024-07-15 14:00:02.179036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.179047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.179335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.179346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.179734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.179745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.180135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.180147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.180535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.180545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.180994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.181005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.181386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.181397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.181785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.181795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.182173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.182184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.182599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.182609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 
00:29:35.839 [2024-07-15 14:00:02.182998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.183008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.183422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.183434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.183749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.183760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.184181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.184192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.184572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.184582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.184976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.184987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.185372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.185383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.185814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.185825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.186209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.186220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.186589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.186600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 
00:29:35.839 [2024-07-15 14:00:02.186985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.186997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.187264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.187275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.187673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.187683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.188071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.188082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.188470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.188482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.188892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.188904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.189159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.189171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.189577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.189587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.189900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.189912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.190326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.190339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 
00:29:35.839 [2024-07-15 14:00:02.190724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.190734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.191128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.191139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.191500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.191510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.191926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.191937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.192416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.192454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.192875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.192887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.193353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.193392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.193799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.193811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.194201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.194213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.194603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.194614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 
00:29:35.839 [2024-07-15 14:00:02.195005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.195019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.195332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.195343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.195655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.195668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.196061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.196073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.196461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.196472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.196885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.196896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.197284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.197295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.197715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.197726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.198099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.198111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.198407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.198419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 
00:29:35.839 [2024-07-15 14:00:02.198795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.198806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.199192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.199204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.199592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.199603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.200012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.200023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.200432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.200443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.200836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.200847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.201159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.201171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.201572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.201583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.201968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.201979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.202183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.202193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 
00:29:35.839 [2024-07-15 14:00:02.202538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.839 [2024-07-15 14:00:02.202548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.839 qpair failed and we were unable to recover it. 00:29:35.839 [2024-07-15 14:00:02.202939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.202950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.203324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.203335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.203723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.203734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.204121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.204138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.204456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.204467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.204879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.204890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.205275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.205286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.205701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.205711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.206144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.206154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 
00:29:35.840 [2024-07-15 14:00:02.206539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.206552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.206834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.206845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.207063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.207077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.207424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.207435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.207855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.207866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.208253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.208265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.208679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.208691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.209128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.209140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.209530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.209543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.209851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.209863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 
00:29:35.840 [2024-07-15 14:00:02.210255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.210266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.210524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.210535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.210951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.210962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.211392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.211403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.211799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.211809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.212242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.212253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.212641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.212652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.212867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.212878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.213271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.213282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.213706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.213716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 
00:29:35.840 [2024-07-15 14:00:02.214131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.214142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.214532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.214542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.214930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.214940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.215498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.215536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.215943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.215956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.216304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.216343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.216761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.216775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.217169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.217185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.217483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.217493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.217881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.217892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 
00:29:35.840 [2024-07-15 14:00:02.218280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.218291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.218678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.218688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.218977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.218989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.219419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.219431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.219818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.219831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.220222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.220233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.220643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.220654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.221036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.221047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.221444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.221457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.221847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.221859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 
00:29:35.840 [2024-07-15 14:00:02.222202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.222213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.222603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.222615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.223006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.223017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.223429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.223440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.223850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.223861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.224231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.224241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.224518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.224530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.224845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.224855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.225271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.225282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.225707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.225717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 
00:29:35.840 [2024-07-15 14:00:02.226106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.226117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.226564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.226575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.226995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.227006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.227394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.227406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.227702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.227714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.228158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.228170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.228465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.228476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.228878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.228889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.229111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.229135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.229553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.229564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 
00:29:35.840 [2024-07-15 14:00:02.229947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.229958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.230360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.230371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.230767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.230777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.231168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.840 [2024-07-15 14:00:02.231179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.840 qpair failed and we were unable to recover it. 00:29:35.840 [2024-07-15 14:00:02.231590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.231601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.231993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.232003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.232427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.232439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.232835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.232846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.233227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.233244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.233646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.233657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 
00:29:35.841 [2024-07-15 14:00:02.234051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.234062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.234443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.234454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.234884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.234894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.235283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.235293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.235692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.235703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.236109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.236120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.236531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.236543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.236950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.236961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.237408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.237448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.237844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.237857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 
00:29:35.841 [2024-07-15 14:00:02.238374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.238413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.238710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.238724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.239132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.239144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.239609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.239620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.240036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.240047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.240447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.240459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.240857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.240867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.241366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.241404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.241820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.241834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.242345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.242384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 
00:29:35.841 [2024-07-15 14:00:02.242780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.242793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.243077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.243089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.243480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.243493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.243877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.243889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.244271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.244282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.244532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.244547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.244931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.244941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.245341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.245352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.245818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.245829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.246218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.246229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 
00:29:35.841 [2024-07-15 14:00:02.246656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.246667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.247055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.247066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.247463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.247475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.247858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.247869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.248173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.248185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.248588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.248599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.248924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.248935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.249323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.249333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.249778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.249789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 00:29:35.841 [2024-07-15 14:00:02.250171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.841 [2024-07-15 14:00:02.250182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.841 qpair failed and we were unable to recover it. 
00:29:35.841 [2024-07-15 14:00:02.250606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.841 [2024-07-15 14:00:02.250616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:35.841 qpair failed and we were unable to recover it.
00:29:35.841 [the same pair of errors -- connect() failed with errno = 111, followed by the sock connection error and "qpair failed and we were unable to recover it" for tqpair=0x78d220 (addr=10.0.0.2, port=4420) -- repeats for every connection attempt from 14:00:02.250606 through 14:00:02.331242]
00:29:35.844 [2024-07-15 14:00:02.331230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.844 [2024-07-15 14:00:02.331242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:35.844 qpair failed and we were unable to recover it.
00:29:35.844 [2024-07-15 14:00:02.331635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.331646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.332044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.332054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.332505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.332515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.332804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.332815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.333101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.333112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.333506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.333518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.333943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.333954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.334366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.334377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.334609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.334619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.335008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.335018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 
00:29:35.844 [2024-07-15 14:00:02.335428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.335439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.335607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.335620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.336046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.336057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.336331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.336342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.336728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.336738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.337134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.337145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.337464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.337474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.337786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.337797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.338182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.338194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.338605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.338615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 
00:29:35.844 [2024-07-15 14:00:02.339009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.339022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.339348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.339360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.339665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.339676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.340069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.340080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.340305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.340316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.340716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.340727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.341132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.341144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.341519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.341529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.341925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.341935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 00:29:35.844 [2024-07-15 14:00:02.342342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.844 [2024-07-15 14:00:02.342353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.844 qpair failed and we were unable to recover it. 
00:29:35.844 [2024-07-15 14:00:02.342743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.342753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.343141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.343153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.343533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.343543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.343819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.343830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.344238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.344249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.344709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.344719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.345072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.345083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.345446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.345457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.345738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.345748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.346155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.346167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 
00:29:35.845 [2024-07-15 14:00:02.346563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.346574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.346870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.346880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.347138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.347148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.347457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.347469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.347752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.347763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.348172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.348183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.348459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.348469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.348885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.348895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.349402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.349413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.349794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.349805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 
00:29:35.845 [2024-07-15 14:00:02.350195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.350207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.350532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.350542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.350970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.350980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.351366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.351377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.351755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.351765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.352157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.352168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.352619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.352630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.353004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.353015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.353415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.353428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.353846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.353857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 
00:29:35.845 [2024-07-15 14:00:02.354289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.354300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:35.845 [2024-07-15 14:00:02.354599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.845 [2024-07-15 14:00:02.354612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:35.845 qpair failed and we were unable to recover it. 00:29:36.115 [2024-07-15 14:00:02.354946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.115 [2024-07-15 14:00:02.354958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.115 qpair failed and we were unable to recover it. 00:29:36.115 [2024-07-15 14:00:02.355347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.115 [2024-07-15 14:00:02.355359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.115 qpair failed and we were unable to recover it. 00:29:36.115 [2024-07-15 14:00:02.355748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.115 [2024-07-15 14:00:02.355759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.115 qpair failed and we were unable to recover it. 00:29:36.115 [2024-07-15 14:00:02.356182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.115 [2024-07-15 14:00:02.356193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.115 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.356589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.356600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.357014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.357025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.357285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.357296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.357709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.357720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 
00:29:36.116 [2024-07-15 14:00:02.358093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.358103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.358507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.358519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.358910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.358921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.359272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.359283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.359608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.359619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.360034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.360044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.360409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.360420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.360796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.360807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.361198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.361209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.361505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.361516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 
00:29:36.116 [2024-07-15 14:00:02.361912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.361922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.362279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.362290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.362676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.362687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.363083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.363094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.363444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.363455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.363759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.363771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.364194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.364205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.364598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.364608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.365076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.365089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.365485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.365496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 
00:29:36.116 [2024-07-15 14:00:02.365881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.365892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.366273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.366284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.366692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.366703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.367114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.367131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.367383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.367394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.367810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.367821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.368231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.368242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.368587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.368597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.368987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.369000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.369304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.369315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 
00:29:36.116 [2024-07-15 14:00:02.369645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.369656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.370070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.370081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.370476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.370487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.370775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.370785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.371175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.371186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.371579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.116 [2024-07-15 14:00:02.371590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.116 qpair failed and we were unable to recover it. 00:29:36.116 [2024-07-15 14:00:02.371976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.371986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.372384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.372394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.372674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.372685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.373002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.373013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 
00:29:36.117 [2024-07-15 14:00:02.373315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.373325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.373717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.373729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.374016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.374026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.374466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.374477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.374866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.374876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.375206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.375220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.375641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.375651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.375945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.375957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.376271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.376282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.376682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.376692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 
00:29:36.117 [2024-07-15 14:00:02.376802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.376814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.377167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.377178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.377594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.377605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.377852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.377863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.378318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.378329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.378722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.378733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.378956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.378966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.379244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.379254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.379663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.379674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 00:29:36.117 [2024-07-15 14:00:02.379950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.117 [2024-07-15 14:00:02.379965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.117 qpair failed and we were unable to recover it. 
00:29:36.117 [2024-07-15 14:00:02.380253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.117 [2024-07-15 14:00:02.380264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:36.117 qpair failed and we were unable to recover it.
00:29:36.117 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously with new timestamps through 2024-07-15 14:00:02.461687 (job time 00:29:36.125) ...]
00:29:36.125 [2024-07-15 14:00:02.462080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.125 [2024-07-15 14:00:02.462090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.125 qpair failed and we were unable to recover it. 00:29:36.125 [2024-07-15 14:00:02.462402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.125 [2024-07-15 14:00:02.462414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.125 qpair failed and we were unable to recover it. 00:29:36.125 [2024-07-15 14:00:02.462808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.125 [2024-07-15 14:00:02.462819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.125 qpair failed and we were unable to recover it. 00:29:36.125 [2024-07-15 14:00:02.463241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.125 [2024-07-15 14:00:02.463252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.125 qpair failed and we were unable to recover it. 00:29:36.125 [2024-07-15 14:00:02.463650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.125 [2024-07-15 14:00:02.463661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.125 qpair failed and we were unable to recover it. 00:29:36.125 [2024-07-15 14:00:02.463924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.125 [2024-07-15 14:00:02.463937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.125 qpair failed and we were unable to recover it. 00:29:36.125 [2024-07-15 14:00:02.464252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.125 [2024-07-15 14:00:02.464264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.125 qpair failed and we were unable to recover it. 00:29:36.125 [2024-07-15 14:00:02.464669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.125 [2024-07-15 14:00:02.464681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.125 qpair failed and we were unable to recover it. 00:29:36.125 [2024-07-15 14:00:02.465090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.125 [2024-07-15 14:00:02.465101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.125 qpair failed and we were unable to recover it. 00:29:36.125 [2024-07-15 14:00:02.465580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.125 [2024-07-15 14:00:02.465591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.125 qpair failed and we were unable to recover it. 
00:29:36.125 [2024-07-15 14:00:02.465978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.125 [2024-07-15 14:00:02.465989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.466462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.466500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.466906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.466919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.467350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.467389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.467869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.467883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.468391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.468430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.468843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.468856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.469280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.469291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.469687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.469698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.470095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.470106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 
00:29:36.126 [2024-07-15 14:00:02.470504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.470517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.470887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.470898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.471364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.471402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.471802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.471815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.472190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.472202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.472637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.472648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.473037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.473048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.473446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.473457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.473853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.473863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.474281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.474293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 
00:29:36.126 [2024-07-15 14:00:02.474715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.474727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.475109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.475126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.475436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.475447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.475875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.475886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.476368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.476410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.476841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.476854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.477257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.477269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.477674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.477686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.478091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.478102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.478504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.478516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 
00:29:36.126 [2024-07-15 14:00:02.478834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.478846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.479141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.479153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.479565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.479575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.479965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.479975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.480378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.480389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.126 [2024-07-15 14:00:02.480611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.126 [2024-07-15 14:00:02.480625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.126 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.480956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.480966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.481273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.481286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.481585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.481596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.481988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.481998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 
00:29:36.127 [2024-07-15 14:00:02.482258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.482270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.482649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.482659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.483052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.483063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.483474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.483485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.483878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.483889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.484079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.484091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.484508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.484521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.484971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.484983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.485393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.485431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.485788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.485802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 
00:29:36.127 [2024-07-15 14:00:02.486202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.486213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.486636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.486651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.487043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.487054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.487275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.487286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.487696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.487707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.488115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.488131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.488534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.488545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.488964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.488975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.489460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.489499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.489895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.489907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 
00:29:36.127 [2024-07-15 14:00:02.490431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.490470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.490825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.490839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.127 [2024-07-15 14:00:02.491130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.127 [2024-07-15 14:00:02.491141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.127 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.491485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.491496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.491892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.491903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.492427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.492465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.492865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.492878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.493406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.493444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.493840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.493853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.494368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.494407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 
00:29:36.128 [2024-07-15 14:00:02.494772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.494785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.495176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.495187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.495583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.495594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.495985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.495996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.496414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.496426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.496728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.496741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.497141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.497154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.497462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.497473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.497885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.497895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.498038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.498048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 
00:29:36.128 [2024-07-15 14:00:02.498423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.498434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.498829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.498839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.499182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.499193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.499626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.499637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.500052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.500063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.500457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.500469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.500859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.500871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.501252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.501264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.501661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.128 [2024-07-15 14:00:02.501673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.128 qpair failed and we were unable to recover it. 00:29:36.128 [2024-07-15 14:00:02.502112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.502136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 
00:29:36.129 [2024-07-15 14:00:02.502563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.502574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.502987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.502997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.503411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.503454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.503869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.503882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.504342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.504379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.504691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.504704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.505000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.505011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.505388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.505400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.505791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.505802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.506179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.506190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 
00:29:36.129 [2024-07-15 14:00:02.506582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.506594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.507011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.507023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.507414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.507425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.507721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.507733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.508131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.508142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.508540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.508552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.508953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.508965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.509351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.509362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.509751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.509762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.510192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.510203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 
00:29:36.129 [2024-07-15 14:00:02.510635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.510645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.511071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.511082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.511472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.511483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.511887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.511897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.512323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.512335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.512741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.129 [2024-07-15 14:00:02.512752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.129 qpair failed and we were unable to recover it. 00:29:36.129 [2024-07-15 14:00:02.513047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.130 [2024-07-15 14:00:02.513059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.130 qpair failed and we were unable to recover it. 00:29:36.130 [2024-07-15 14:00:02.513417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.130 [2024-07-15 14:00:02.513428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.130 qpair failed and we were unable to recover it. 00:29:36.130 [2024-07-15 14:00:02.513726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.130 [2024-07-15 14:00:02.513736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.130 qpair failed and we were unable to recover it. 00:29:36.130 [2024-07-15 14:00:02.514057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.130 [2024-07-15 14:00:02.514071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.130 qpair failed and we were unable to recover it. 
00:29:36.130 [2024-07-15 14:00:02.514374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.130 [2024-07-15 14:00:02.514385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:36.130 qpair failed and we were unable to recover it.
00:29:36.130 [2024-07-15 14:00:02.514777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.130 [2024-07-15 14:00:02.514787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:36.130 qpair failed and we were unable to recover it.
00:29:36.130 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 14:00:02.515200 through 14:00:02.591923 ...]
00:29:36.137 [2024-07-15 14:00:02.592320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.137 [2024-07-15 14:00:02.592331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:36.137 qpair failed and we were unable to recover it.
00:29:36.137 [2024-07-15 14:00:02.592749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.592759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.593171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.593182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.593572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.593582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.593860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.593870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.594238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.594249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.594624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.594634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.594932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.594943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.595198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.595210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.595544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.595557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.595776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.595787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 
00:29:36.137 [2024-07-15 14:00:02.596056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.596067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.596457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.596468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.596930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.596941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.597324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.597335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.597723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.597734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.598128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.598139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.598413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.598424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.598842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.598853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.599241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.599252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.599534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.599546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 
00:29:36.137 [2024-07-15 14:00:02.599998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.600010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.600382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.600394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.600781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.600793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.601127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.601139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.601552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.601563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.601975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.137 [2024-07-15 14:00:02.601986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.137 qpair failed and we were unable to recover it. 00:29:36.137 [2024-07-15 14:00:02.602523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.602562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.602853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.602867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.603264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.603276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.603664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.603676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 
00:29:36.138 [2024-07-15 14:00:02.603968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.603979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.604402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.604414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.604809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.604820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.605362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.605401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.605613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.605629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.606027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.606039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.606461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.606473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.606855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.606866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.607278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.607289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.607558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.607568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 
00:29:36.138 [2024-07-15 14:00:02.607954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.607965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.608407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.608419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.608836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.608848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.609270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.609281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.609713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.609724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.609966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.609977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.610391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.610404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.610819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.610830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.611116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.611147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.611460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.611471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 
00:29:36.138 [2024-07-15 14:00:02.611855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.611865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.612367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.612406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.612807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.612820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.613129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.613142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.613590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.613601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.613805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.613819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.614103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.614114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.614413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.614425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.614815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.614827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.615080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.615090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 
00:29:36.138 [2024-07-15 14:00:02.615307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.615318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.615707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.615717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.616108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.616119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.616517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.616528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.617023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.617033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.617436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.617447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.617866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.617877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-15 14:00:02.618395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.138 [2024-07-15 14:00:02.618434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.618860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.618873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.619344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.619382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 
00:29:36.139 [2024-07-15 14:00:02.619778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.619791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.620197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.620209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.620600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.620612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.620988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.620999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.621417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.621429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.621809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.621824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.622342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.622381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.622785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.622798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.623119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.623138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.623590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.623601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 
00:29:36.139 [2024-07-15 14:00:02.623919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.623930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.624351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.624390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.624708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.624722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.625130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.625142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.625529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.625539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.625947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.625958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.626380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.626418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.626821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.626833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.627349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.627387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.627801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.627814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 
00:29:36.139 [2024-07-15 14:00:02.628209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.628221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.628703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.628713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.629101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.629110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.629419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.629429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.629808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.629817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.630162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.630172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.630585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.630594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.139 [2024-07-15 14:00:02.631007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.139 [2024-07-15 14:00:02.631015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.139 qpair failed and we were unable to recover it. 00:29:36.411 [2024-07-15 14:00:02.631512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.411 [2024-07-15 14:00:02.631525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.411 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.631926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.631936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 
00:29:36.412 [2024-07-15 14:00:02.632331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.632343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.632714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.632726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.633013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.633028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.633413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.633425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.633815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.633827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.634074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.634085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.634386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.634398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.634798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.634809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.635283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.635294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.635713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.635725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 
00:29:36.412 [2024-07-15 14:00:02.636018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.636029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.636286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.636297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.636702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.636714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.637131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.637143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.637445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.637456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.637855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.637866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.638266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.638278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.638656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.638668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.638891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.638906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.639255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.639267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 
00:29:36.412 [2024-07-15 14:00:02.639549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.639561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.640020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.640033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.640319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.640331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.640734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.640745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.641141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.641153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.641446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.641458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.641795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.641807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.642221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.642232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.642603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.642615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 00:29:36.412 [2024-07-15 14:00:02.642990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.643002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it. 
00:29:36.412 [2024-07-15 14:00:02.643266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.412 [2024-07-15 14:00:02.643281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.412 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it sequence repeats continuously from 14:00:02.643 through 14:00:02.722 ...]
00:29:36.418 [2024-07-15 14:00:02.722697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.722708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it.
00:29:36.418 [2024-07-15 14:00:02.723006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.723017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.723446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.723457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.723837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.723848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.724001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.724013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.724293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.724305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.724726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.724738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.725135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.725147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.725521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.725536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.725922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.725933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.726248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.726259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 
00:29:36.418 [2024-07-15 14:00:02.726563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.726574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.726966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.726977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.727250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.727260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.727642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.727652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.728034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.728045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.728322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.728335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.728754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.728765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.729184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.729195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.729587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.729597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.729827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.729837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 
00:29:36.418 [2024-07-15 14:00:02.730236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.730247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.730635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.730646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.730920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.730931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.731328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.731339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.731730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.731740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.732135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.732146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.732565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.732577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.732780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.732794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.733188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.733199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.733599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.733609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 
00:29:36.418 [2024-07-15 14:00:02.733994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.734005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.734439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.734452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.734750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.734762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.735163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.418 [2024-07-15 14:00:02.735183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.418 qpair failed and we were unable to recover it. 00:29:36.418 [2024-07-15 14:00:02.735611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.735622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.736013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.736025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.736281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.736294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.736689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.736700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.737130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.737142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.737527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.737538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 
00:29:36.419 [2024-07-15 14:00:02.737833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.737845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.738250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.738262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.738661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.738672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.739070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.739081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.739479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.739490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.739870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.739881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.740257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.740270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.740686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.740697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.741008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.741022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.741326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.741338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 
00:29:36.419 [2024-07-15 14:00:02.741740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.741752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.742144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.742156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.742565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.742576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.742971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.742981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.743271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.743281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.743581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.743592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.744091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.744103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.744569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.744581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.744999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.745010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.745359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.745371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 
00:29:36.419 [2024-07-15 14:00:02.745767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.745778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.746196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.746208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.746699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.746711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.746927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.746940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.747274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.747285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.747695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.747706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.748105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.748117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.748437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.748447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.748836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.748847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.749280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.749291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 
00:29:36.419 [2024-07-15 14:00:02.749687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.749697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.750091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.750103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.750495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.750506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.750915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.750927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.751423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.751461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.419 qpair failed and we were unable to recover it. 00:29:36.419 [2024-07-15 14:00:02.751876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.419 [2024-07-15 14:00:02.751894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.752372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.752411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.752794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.752807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.753271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.753283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.753647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.753658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 
00:29:36.420 [2024-07-15 14:00:02.754031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.754042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.754407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.754419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.754801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.754813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.755202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.755213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.755670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.755680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.756096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.756107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.756491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.756502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.756798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.756810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.757200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.757211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.757563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.757575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 
00:29:36.420 [2024-07-15 14:00:02.757847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.757858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.758315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.758326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.758718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.758729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.759140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.759152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.759558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.759569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.759961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.759971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.760373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.760384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.760767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.760778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.761177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.761188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.761550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.761561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 
00:29:36.420 [2024-07-15 14:00:02.761856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.761867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.762162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.762173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.762548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.762559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.762956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.762968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.763274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.763286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.763599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.763611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.763993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.764004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.764390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.764401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.764746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.764757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.765217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.765228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 
00:29:36.420 [2024-07-15 14:00:02.765630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.420 [2024-07-15 14:00:02.765641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.420 qpair failed and we were unable to recover it. 00:29:36.420 [2024-07-15 14:00:02.766069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.766080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.766355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.766366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.766630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.766642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.766929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.766941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.767338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.767349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.767813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.767827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.768210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.768221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.768600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.768610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.768995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.769006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 
00:29:36.421 [2024-07-15 14:00:02.769429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.769439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.769858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.769868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.770162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.770173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.770568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.770579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.770970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.770981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.771377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.771388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.771778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.771789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.772082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.772093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.772500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.772513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.772883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.772894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 
00:29:36.421 [2024-07-15 14:00:02.773168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.773179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.773545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.773556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.773937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.773948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.774323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.774334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.774728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.774739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.775156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.775167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.775583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.775594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.776002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.776013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.776401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.776412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.776802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.776813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 
00:29:36.421 [2024-07-15 14:00:02.777104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.777116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.777433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.777444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.777840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.777851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.778242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.778255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.778678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.778689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.779105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.779116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.779550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.779561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.779786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.779801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.780230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.780241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 00:29:36.421 [2024-07-15 14:00:02.780653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.421 [2024-07-15 14:00:02.780665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.421 qpair failed and we were unable to recover it. 
00:29:36.422 [2024-07-15 14:00:02.781055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.781067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.781297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.781308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.781660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.781671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.782074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.782085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.782485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.782497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.782901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.782911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.783232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.783243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.783624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.783636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.783965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.783976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.784370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.784381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 
00:29:36.422 [2024-07-15 14:00:02.784766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.784777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.785131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.785142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.785528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.785538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.785932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.785942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.786347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.786385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.786769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.786782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.787354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.787393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.787854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.787867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.788252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.788264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.788643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.788654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 
00:29:36.422 [2024-07-15 14:00:02.789041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.789053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.789356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.789368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.789763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.789775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.790068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.790079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.790507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.790518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.790910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.790922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.791325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.791338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.791633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.791645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.792033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.792044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.792371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.792384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 
00:29:36.422 [2024-07-15 14:00:02.792779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.792790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.793213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.793224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.793630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.793641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.794038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.794049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.794456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.794470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.794767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.794778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.795159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.795170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.795437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.795448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.795839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.795849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.422 qpair failed and we were unable to recover it. 00:29:36.422 [2024-07-15 14:00:02.796226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.422 [2024-07-15 14:00:02.796237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 
00:29:36.423 [2024-07-15 14:00:02.796624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.796635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.796820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.796830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.797232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.797243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.797650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.797661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.797958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.797970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.798262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.798273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.798655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.798665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.799036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.799047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.799459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.799471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.799925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.799936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 
00:29:36.423 [2024-07-15 14:00:02.800339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.800351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.800725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.800736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.801120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.801136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.801544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.801555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.801938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.801950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.802436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.802475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.802874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.802886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.803405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.803443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.803850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.803863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.804375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.804412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 
00:29:36.423 [2024-07-15 14:00:02.804837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.804850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.805336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.805352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.805738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.805748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.806165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.806177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.806500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.806511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.806909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.806920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.807216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.807227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.807683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.807694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.808079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.808091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.808492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.808504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 
00:29:36.423 [2024-07-15 14:00:02.808801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.808811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.809205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.809216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.809613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.809623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.810004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.810015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.810412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.810422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.810798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.810810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.811206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.811217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.811563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.811574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.811968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.811980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.423 [2024-07-15 14:00:02.812320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.812331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 
00:29:36.423 [2024-07-15 14:00:02.812640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.423 [2024-07-15 14:00:02.812651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.423 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.812929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.812939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.813320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.813331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.813701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.813713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.814094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.814106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.814507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.814519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.814708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.814719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.814999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.815011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.815291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.815303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.815713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.815724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 
00:29:36.424 [2024-07-15 14:00:02.816167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.816178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.816508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.816528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.816926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.816936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.817275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.817294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.817699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.817709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.817999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.818010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.818403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.818413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.818782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.818793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.819209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.819220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.819623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.819634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 
00:29:36.424 [2024-07-15 14:00:02.819945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.819956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.820285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.820296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.820703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.820716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.821089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.821099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.821495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.821506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.821796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.821806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.822200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.822212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.822631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.822642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.823052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.823063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.823375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.823387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 
00:29:36.424 [2024-07-15 14:00:02.823839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.823850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.824289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.824300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.824691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.824702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.825096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.825107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.825505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.825517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.825931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.825942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.826474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.826513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.826918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.826931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.827345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.827385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.827684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.827697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 
00:29:36.424 [2024-07-15 14:00:02.828117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.828137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.424 qpair failed and we were unable to recover it. 00:29:36.424 [2024-07-15 14:00:02.828430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.424 [2024-07-15 14:00:02.828441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.828833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.828844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.829135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.829147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.829627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.829638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.829959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.829971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.830458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.830496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.830791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.830805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.831102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.831113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.831350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.831361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 
00:29:36.425 [2024-07-15 14:00:02.831714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.831726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.832135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.832146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.832450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.832460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.832760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.832771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.833054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.833065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.833342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.833353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.833752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.833762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.834144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.834154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.834551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.834561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.834855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.834866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 
00:29:36.425 [2024-07-15 14:00:02.835291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.835303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.835699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.835710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.836102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.836112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.836506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.836518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.836912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.836923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.837320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.837332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.837726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.837736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.838014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.838025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.838279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.838291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 00:29:36.425 [2024-07-15 14:00:02.838687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.425 [2024-07-15 14:00:02.838698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.425 qpair failed and we were unable to recover it. 
00:29:36.425 [2024-07-15 14:00:02.839090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.425 [2024-07-15 14:00:02.839102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:36.425 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1038 connect() errno = 111 -> nvme_tcp.c:2383 sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats for every reconnect attempt logged between 14:00:02.839 and 14:00:02.920 ...]
00:29:36.431 [2024-07-15 14:00:02.920018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.431 [2024-07-15 14:00:02.920029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:36.431 qpair failed and we were unable to recover it.
00:29:36.431 [2024-07-15 14:00:02.920441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.431 [2024-07-15 14:00:02.920452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.431 qpair failed and we were unable to recover it. 00:29:36.431 [2024-07-15 14:00:02.920869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.431 [2024-07-15 14:00:02.920880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.431 qpair failed and we were unable to recover it. 00:29:36.431 [2024-07-15 14:00:02.921256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.431 [2024-07-15 14:00:02.921268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.431 qpair failed and we were unable to recover it. 00:29:36.431 [2024-07-15 14:00:02.921651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.431 [2024-07-15 14:00:02.921662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.431 qpair failed and we were unable to recover it. 00:29:36.431 [2024-07-15 14:00:02.922043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.431 [2024-07-15 14:00:02.922054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.431 qpair failed and we were unable to recover it. 00:29:36.431 [2024-07-15 14:00:02.922537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.431 [2024-07-15 14:00:02.922549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.431 qpair failed and we were unable to recover it. 00:29:36.431 [2024-07-15 14:00:02.922941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.431 [2024-07-15 14:00:02.922951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.431 qpair failed and we were unable to recover it. 00:29:36.431 [2024-07-15 14:00:02.923351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.431 [2024-07-15 14:00:02.923362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.431 qpair failed and we were unable to recover it. 00:29:36.431 [2024-07-15 14:00:02.923743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.431 [2024-07-15 14:00:02.923754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.431 qpair failed and we were unable to recover it. 00:29:36.431 [2024-07-15 14:00:02.924139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.431 [2024-07-15 14:00:02.924150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.431 qpair failed and we were unable to recover it. 
00:29:36.431 [2024-07-15 14:00:02.924573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.431 [2024-07-15 14:00:02.924584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.431 qpair failed and we were unable to recover it. 00:29:36.431 [2024-07-15 14:00:02.925003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.431 [2024-07-15 14:00:02.925014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.431 qpair failed and we were unable to recover it. 00:29:36.431 [2024-07-15 14:00:02.925375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.431 [2024-07-15 14:00:02.925387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.431 qpair failed and we were unable to recover it. 00:29:36.431 [2024-07-15 14:00:02.925671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.431 [2024-07-15 14:00:02.925682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.431 qpair failed and we were unable to recover it. 00:29:36.702 [2024-07-15 14:00:02.926093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.702 [2024-07-15 14:00:02.926108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.702 qpair failed and we were unable to recover it. 00:29:36.702 [2024-07-15 14:00:02.926421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.702 [2024-07-15 14:00:02.926435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.702 qpair failed and we were unable to recover it. 00:29:36.702 [2024-07-15 14:00:02.926824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.702 [2024-07-15 14:00:02.926836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.702 qpair failed and we were unable to recover it. 00:29:36.702 [2024-07-15 14:00:02.927135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.702 [2024-07-15 14:00:02.927147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.702 qpair failed and we were unable to recover it. 00:29:36.702 [2024-07-15 14:00:02.927518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.702 [2024-07-15 14:00:02.927529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.702 qpair failed and we were unable to recover it. 00:29:36.702 [2024-07-15 14:00:02.927938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.702 [2024-07-15 14:00:02.927948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.702 qpair failed and we were unable to recover it. 
00:29:36.702 [2024-07-15 14:00:02.928186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.928196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.928622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.928633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.929024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.929034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.929432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.929443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.929793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.929804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.930198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.930208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.930500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.930512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.930727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.930740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.931067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.931078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.931462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.931473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 
00:29:36.703 [2024-07-15 14:00:02.931883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.931894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.932286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.932297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.932733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.932744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.933137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.933148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.933448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.933458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.933736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.933746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.934172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.934183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.934587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.934597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.934990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.935001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.935303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.935315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 
00:29:36.703 [2024-07-15 14:00:02.935747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.935757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.936146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.936157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.936554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.936565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.936959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.936969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.937374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.937386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.937583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.937594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.937944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.937955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.938115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.938130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.938561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.938572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.703 [2024-07-15 14:00:02.938957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.938968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 
00:29:36.703 [2024-07-15 14:00:02.939456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.703 [2024-07-15 14:00:02.939466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.703 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.939855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.939865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.940364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.940402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.940839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.940852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.941132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.941144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.941521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.941539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.941924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.941935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.942427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.942465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.942867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.942879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.943341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.943379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 
00:29:36.704 [2024-07-15 14:00:02.943819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.943832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.944397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.944436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.944671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.944686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.945059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.945070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.945352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.945364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.945765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.945776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.946162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.946173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.946590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.946600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.947011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.947022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.947354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.947367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 
00:29:36.704 [2024-07-15 14:00:02.947767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.947778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.948180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.948191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.948608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.948620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.949010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.949020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.949488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.949500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.949883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.949894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.950223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.950235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.950662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.950673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.951071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.951082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 00:29:36.704 [2024-07-15 14:00:02.951377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.951389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.704 qpair failed and we were unable to recover it. 
00:29:36.704 [2024-07-15 14:00:02.951792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.704 [2024-07-15 14:00:02.951803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.952179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.952190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.952577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.952588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.952883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.952894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.953282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.953293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.953684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.953694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.954086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.954098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.954453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.954464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.954869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.954879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.955320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.955331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 
00:29:36.705 [2024-07-15 14:00:02.955723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.955733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.956140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.956152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.956548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.956559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.956949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.956960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.957362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.957373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.957679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.957690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.958145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.958156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.958560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.958570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.958884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.958895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.959206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.959217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 
00:29:36.705 [2024-07-15 14:00:02.959611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.959621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.959886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.959898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.960328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.960339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.960731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.960742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.961153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.961164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.961522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.961533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.961922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.961933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.962221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.962232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.962458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.962471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.962892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.962903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 
00:29:36.705 [2024-07-15 14:00:02.963334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.963345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.963734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.963745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.964114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.964130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.705 [2024-07-15 14:00:02.964525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.705 [2024-07-15 14:00:02.964535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.705 qpair failed and we were unable to recover it. 00:29:36.706 [2024-07-15 14:00:02.964945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.706 [2024-07-15 14:00:02.964956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.706 qpair failed and we were unable to recover it. 00:29:36.706 [2024-07-15 14:00:02.965282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.706 [2024-07-15 14:00:02.965294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.706 qpair failed and we were unable to recover it. 00:29:36.706 [2024-07-15 14:00:02.965699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.706 [2024-07-15 14:00:02.965710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.706 qpair failed and we were unable to recover it. 00:29:36.706 [2024-07-15 14:00:02.966133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.706 [2024-07-15 14:00:02.966145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.706 qpair failed and we were unable to recover it. 00:29:36.706 [2024-07-15 14:00:02.966522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.706 [2024-07-15 14:00:02.966533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.706 qpair failed and we were unable to recover it. 00:29:36.706 [2024-07-15 14:00:02.966931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.706 [2024-07-15 14:00:02.966942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.706 qpair failed and we were unable to recover it. 
00:29:36.706 [2024-07-15 14:00:02.967340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.706 [2024-07-15 14:00:02.967351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.706 qpair failed and we were unable to recover it. 00:29:36.706 [2024-07-15 14:00:02.967833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.706 [2024-07-15 14:00:02.967843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.706 qpair failed and we were unable to recover it. 00:29:36.706 [2024-07-15 14:00:02.968254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.706 [2024-07-15 14:00:02.968266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.706 qpair failed and we were unable to recover it. 00:29:36.706 [2024-07-15 14:00:02.968598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.706 [2024-07-15 14:00:02.968611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.706 qpair failed and we were unable to recover it. 00:29:36.706 [2024-07-15 14:00:02.969021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.706 [2024-07-15 14:00:02.969033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.706 qpair failed and we were unable to recover it. 00:29:36.706 [2024-07-15 14:00:02.969426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.706 [2024-07-15 14:00:02.969438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.706 qpair failed and we were unable to recover it. 00:29:36.706 [2024-07-15 14:00:02.969624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.706 [2024-07-15 14:00:02.969637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.706 qpair failed and we were unable to recover it. 00:29:36.706 [2024-07-15 14:00:02.970025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.706 [2024-07-15 14:00:02.970037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.706 qpair failed and we were unable to recover it. 00:29:36.706 [2024-07-15 14:00:02.970409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.706 [2024-07-15 14:00:02.970419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.706 qpair failed and we were unable to recover it. 00:29:36.706 [2024-07-15 14:00:02.970810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.706 [2024-07-15 14:00:02.970821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.706 qpair failed and we were unable to recover it. 
00:29:36.706 [2024-07-15 14:00:02.971115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.706 [2024-07-15 14:00:02.971131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:36.706 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every retried connection attempt from 14:00:02.971 through 14:00:03.051 ...]
00:29:36.713 [2024-07-15 14:00:03.051298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.713 [2024-07-15 14:00:03.051309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:36.713 qpair failed and we were unable to recover it.
00:29:36.713 [2024-07-15 14:00:03.051715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.051726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.052104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.052115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.052552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.052562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.052952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.052963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.053523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.053561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.053956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.053969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.054406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.054445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.054844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.054858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.055371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.055409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.055802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.055815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 
00:29:36.713 [2024-07-15 14:00:03.056194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.056206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.056595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.056606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.057018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.057030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.057444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.057455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.057844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.057854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.058250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.058262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.058622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.058634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.058909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.058920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.059325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.059336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.059741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.059752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 
00:29:36.713 [2024-07-15 14:00:03.060158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.060169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.060555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.060566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.060977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.713 [2024-07-15 14:00:03.060988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.713 qpair failed and we were unable to recover it. 00:29:36.713 [2024-07-15 14:00:03.061375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.061386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.061810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.061821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.062203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.062214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.062620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.062630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.063066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.063078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.063463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.063476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.063859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.063870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 
00:29:36.714 [2024-07-15 14:00:03.064301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.064312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.064687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.064697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.064975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.064986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.065374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.065384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.065768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.065779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.066073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.066084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.066498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.066509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.066893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.066903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.067385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.067423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.067822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.067835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 
00:29:36.714 [2024-07-15 14:00:03.068254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.068266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.068662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.068673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.068944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.068955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.069351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.069362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.069781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.069791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.070165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.070177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.070400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.070414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.070732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.070743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.071130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.071142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.071554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.071565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 
00:29:36.714 [2024-07-15 14:00:03.071955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.071965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.072449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.072488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.072862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.714 [2024-07-15 14:00:03.072874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.714 qpair failed and we were unable to recover it. 00:29:36.714 [2024-07-15 14:00:03.073152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.073164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.073538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.073549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.073938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.073953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.074460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.074474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.074863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.074873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.075349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.075387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.075782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.075795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 
00:29:36.715 [2024-07-15 14:00:03.076050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.076062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.076339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.076351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.076647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.076658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.077045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.077056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.077445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.077456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.077766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.077776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.078192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.078204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.078588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.078599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.079007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.079017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.079404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.079416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 
00:29:36.715 [2024-07-15 14:00:03.079809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.079821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.080214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.080225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.080645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.080655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.081040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.081051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.081448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.081460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.081867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.081877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.082285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.082296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.082687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.082697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.083078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.083088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.083482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.083492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 
00:29:36.715 [2024-07-15 14:00:03.083762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.083773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.084158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.084168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.084584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.084594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.084984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.084994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.085399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.085410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.715 qpair failed and we were unable to recover it. 00:29:36.715 [2024-07-15 14:00:03.085841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.715 [2024-07-15 14:00:03.085854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.086153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.086166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.086461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.086471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.086740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.086751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.087135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.087146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 
00:29:36.716 [2024-07-15 14:00:03.087590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.087600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.087884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.087894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.088303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.088314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.088709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.088720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.089138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.089149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.089542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.089553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.089967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.089980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.090370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.090382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.090770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.090781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.091167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.091178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 
00:29:36.716 [2024-07-15 14:00:03.091561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.091571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.091954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.091964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.092371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.092381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.092696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.092706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.093127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.093138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.093396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.093406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.093756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.093767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.094155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.094166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.094549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.094560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.094859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.094870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 
00:29:36.716 [2024-07-15 14:00:03.095284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.095294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.095685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.095696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.096102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.096112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.716 qpair failed and we were unable to recover it. 00:29:36.716 [2024-07-15 14:00:03.096496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.716 [2024-07-15 14:00:03.096507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 00:29:36.717 [2024-07-15 14:00:03.096894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.717 [2024-07-15 14:00:03.096905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 00:29:36.717 [2024-07-15 14:00:03.097293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.717 [2024-07-15 14:00:03.097304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 00:29:36.717 [2024-07-15 14:00:03.097711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.717 [2024-07-15 14:00:03.097721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 00:29:36.717 [2024-07-15 14:00:03.098027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.717 [2024-07-15 14:00:03.098037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 00:29:36.717 [2024-07-15 14:00:03.098445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.717 [2024-07-15 14:00:03.098456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 00:29:36.717 [2024-07-15 14:00:03.098888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.717 [2024-07-15 14:00:03.098898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 
00:29:36.717 [2024-07-15 14:00:03.099313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.717 [2024-07-15 14:00:03.099324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 00:29:36.717 [2024-07-15 14:00:03.099704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.717 [2024-07-15 14:00:03.099714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 00:29:36.717 [2024-07-15 14:00:03.099893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.717 [2024-07-15 14:00:03.099904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 00:29:36.717 [2024-07-15 14:00:03.100111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.717 [2024-07-15 14:00:03.100129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 00:29:36.717 [2024-07-15 14:00:03.100429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.717 [2024-07-15 14:00:03.100440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 00:29:36.717 [2024-07-15 14:00:03.100828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.717 [2024-07-15 14:00:03.100838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 00:29:36.717 [2024-07-15 14:00:03.101248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.717 [2024-07-15 14:00:03.101259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 00:29:36.717 [2024-07-15 14:00:03.101716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.717 [2024-07-15 14:00:03.101727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 00:29:36.717 [2024-07-15 14:00:03.102009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.717 [2024-07-15 14:00:03.102020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 00:29:36.717 [2024-07-15 14:00:03.102425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.717 [2024-07-15 14:00:03.102436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.717 qpair failed and we were unable to recover it. 
00:29:36.717 [2024-07-15 14:00:03.102816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.717 [2024-07-15 14:00:03.102827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:36.717 qpair failed and we were unable to recover it.
00:29:36.717 [2024-07-15 14:00:03.103214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.717 [2024-07-15 14:00:03.103225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:36.717 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnection attempt logged between 14:00:03.103607 and 14:00:03.184954: posix.c:1038:posix_sock_create reports connect() failed, errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:36.724 [2024-07-15 14:00:03.185436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.724 [2024-07-15 14:00:03.185474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:36.724 qpair failed and we were unable to recover it.
00:29:36.724 [2024-07-15 14:00:03.185770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.185783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.186164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.186180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.186590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.186600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.186982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.186992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.187343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.187354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.187738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.187749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.188159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.188170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.188564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.188574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.188871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.188881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.189267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.189277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 
00:29:36.724 [2024-07-15 14:00:03.189665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.189675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.190063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.190074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.190487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.190498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.190881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.190891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.191171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.191182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.191561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.191572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.191982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.191992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.192378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.192390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.192782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.192793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.193182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.193192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 
00:29:36.724 [2024-07-15 14:00:03.193483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.193493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.193897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.193907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.194297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.194308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-07-15 14:00:03.194599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.724 [2024-07-15 14:00:03.194610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.194993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.195003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.195413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.195424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.195841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.195851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.196238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.196249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.196614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.196626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.197012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.197023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 
00:29:36.725 [2024-07-15 14:00:03.197470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.197481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.197869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.197879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.198192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.198202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.198585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.198596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.198982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.198993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.199457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.199467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.199852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.199862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.200251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.200262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.200460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.200475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.200882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.200893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 
00:29:36.725 [2024-07-15 14:00:03.201304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.201314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.201707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.201717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.202105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.202116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.202508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.202519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.202738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.202749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.203039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.203050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.203440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.203451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.203878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.203889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.204144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.204156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.204564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.204575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 
00:29:36.725 [2024-07-15 14:00:03.204985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.204995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.205312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.205323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.205705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.205716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.206100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.206110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.206491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.206502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.206930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.206940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.207352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.207365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.207749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.207761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.208332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.208372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.208770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.208783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 
00:29:36.725 [2024-07-15 14:00:03.209208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.209219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.209606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.209617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.210036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.210046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.210438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.725 [2024-07-15 14:00:03.210449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-07-15 14:00:03.210858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.210868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-07-15 14:00:03.211252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.211263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-07-15 14:00:03.211519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.211530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-07-15 14:00:03.211921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.211932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-07-15 14:00:03.212226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.212238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-07-15 14:00:03.212656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.212671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 
00:29:36.726 [2024-07-15 14:00:03.212968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.212979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-07-15 14:00:03.213366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.213377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-07-15 14:00:03.213790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.213801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-07-15 14:00:03.214260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.214271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-07-15 14:00:03.214628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.214639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-07-15 14:00:03.215027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.215038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-07-15 14:00:03.215416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.215427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-07-15 14:00:03.215811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.215822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-07-15 14:00:03.216230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.216240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-07-15 14:00:03.216473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.216488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 
00:29:36.726 [2024-07-15 14:00:03.216853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.216864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-07-15 14:00:03.217250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.726 [2024-07-15 14:00:03.217261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.217647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.217659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.218046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.218057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.218439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.218450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.218826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.218836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.219227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.219238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.219634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.219646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.219950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.219961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.220359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.220370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 
00:29:36.998 [2024-07-15 14:00:03.220760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.220770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.221025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.221036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.221371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.221382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.221780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.221790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.222180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.222192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.222494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.222504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.222884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.222894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.223274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.223286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.223637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.223648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.224038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.224049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 
00:29:36.998 [2024-07-15 14:00:03.224440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.224452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.224723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.224734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.225138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.225150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.225557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.225568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.225981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.225991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.226461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.226499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.226895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.226908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.227402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.227439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.227847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.227860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.228163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.228174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 
00:29:36.998 [2024-07-15 14:00:03.228432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.228445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.228832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.228842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.229251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.229262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.229634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.229644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.230042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.230053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.230510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.230521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.230928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.230939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.231319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.231330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.231723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.231733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.232033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.232044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 
00:29:36.998 [2024-07-15 14:00:03.232499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.232510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.232897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.232908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.233332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.233343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.233602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.233613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.233746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.233756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.234220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.234231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.234618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.234628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.235018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.235028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.235423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.235434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 00:29:36.998 [2024-07-15 14:00:03.235818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.998 [2024-07-15 14:00:03.235828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:36.998 qpair failed and we were unable to recover it. 
00:29:36.998 [2024-07-15 14:00:03.236227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.998 [2024-07-15 14:00:03.236237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:36.998 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats for every reconnect attempt, with only the timestamps changing, through 14:00:03.296106 ...]
00:29:37.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1277896 Killed "${NVMF_APP[@]}" "$@"
00:29:37.000 [2024-07-15 14:00:03.296515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.000 [2024-07-15 14:00:03.296526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.000 qpair failed and we were unable to recover it.
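The repeated errno = 111 above is ECONNREFUSED: once the target application is killed by the test (the "Killed" line from target_disconnect.sh), nothing is listening on 10.0.0.2:4420 any more, so every TCP connect the initiator issues for tqpair 0x78d220 is refused and the qpair cannot recover until the target comes back. A quick way to see the same symptom from a shell, assuming bash's /dev/tcp support and the address/port taken from this log (an illustrative spot check, not part of target_disconnect.sh):

# Hypothetical spot check: with no nvmf_tgt listening on 10.0.0.2:4420,
# a raw TCP connect is refused, which is errno 111 (ECONNREFUSED) on Linux.
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
    echo "10.0.0.2:4420 refused the connection (errno 111, ECONNREFUSED)"
fi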
[... the connect() failed, errno = 111 / sock connection error / qpair failed entries for tqpair=0x78d220 keep arriving, interleaved with the shell trace below, from 14:00:03.296935 through 14:00:03.310380 ...]
00:29:37.000 14:00:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:37.000 14:00:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:37.000 14:00:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:37.000 14:00:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:37.000 14:00:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:37.001 14:00:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1279036
00:29:37.001 14:00:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1279036
00:29:37.001 14:00:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1279036 ']'
00:29:37.001 14:00:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:37.001 14:00:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:37.001 14:00:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:37.001 14:00:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:37.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:37.001 14:00:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:37.001 14:00:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
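The shell trace above is the recovery half of this failure injection: disconnect_init re-launches the NVMe-oF target via nvmfappstart -m 0xF0, which starts nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then waits for its JSON-RPC socket before any further configuration. A rough sketch of that start-and-wait pattern, using the paths and namespace name from the trace and polling rpc.py as a stand-in for the real waitforlisten helper in autotest_common.sh:

# Sketch only - nvmfappstart/waitforlisten are the real helpers in nvmf/common.sh
# and autotest_common.sh; this mirrors what the trace shows, with rpc.py polling
# standing in for waitforlisten.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
max_retries=100
# Poll the app's JSON-RPC socket until it answers; the UNIX socket lives on the
# shared filesystem, so it can be reached from outside the network namespace.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    (( max_retries-- )) || { echo "nvmf_tgt (pid $nvmfpid) never started listening"; exit 1; }
    sleep 0.5
done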
00:29:37.001 [2024-07-15 14:00:03.310764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.310775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.311062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.311073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.311381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.311392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.311677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.311688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.311982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.311992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.312306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.312317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.312722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.312733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.313128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.313140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.313554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.313565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.313983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.313994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 
00:29:37.001 [2024-07-15 14:00:03.314381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.314421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.314819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.314833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.315115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.315135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.315517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.315528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.315948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.315960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.316447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.316485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.316886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.316900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.317427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.317465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.317853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.317867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.318398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.318437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 
00:29:37.001 [2024-07-15 14:00:03.318840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.318856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.319365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.319403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.319829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.319842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.320226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.320237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.320606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.320617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.321045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.321055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.321576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.321587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.321969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.321980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.322488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.322526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.322929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.322942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 
00:29:37.001 [2024-07-15 14:00:03.323428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.323466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.323863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.323875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.324346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.324384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.324788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.324801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.325271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.325283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.325681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.325692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.326082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.326094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.326487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.326498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.326932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.326942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.327349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.327388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 
00:29:37.001 [2024-07-15 14:00:03.327799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.327812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.328205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.328217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.328700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.328710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.329094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.329105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.329494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.329506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.329845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.329856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.330405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.330444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.330855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.330868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.331178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.331190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.331593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.331604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 
00:29:37.001 [2024-07-15 14:00:03.332010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.332021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.332426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.332437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.332832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.332843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.333243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.333255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.333625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.333637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.333853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.333864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.334086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.334097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.001 qpair failed and we were unable to recover it. 00:29:37.001 [2024-07-15 14:00:03.334466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.001 [2024-07-15 14:00:03.334478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.334901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.334912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.335116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.335132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 
00:29:37.002 [2024-07-15 14:00:03.335501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.335512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.335906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.335919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.336360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.336398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.336796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.336809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.337068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.337079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.337528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.337540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.337956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.337967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.338347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.338385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.338784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.338797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.339250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.339261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 
00:29:37.002 [2024-07-15 14:00:03.339652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.339664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.339924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.339936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.340331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.340342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.340722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.340733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.341168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.341179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.341629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.341640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.342059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.342070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.342493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.342504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.342797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.342808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.343229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.343240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 
00:29:37.002 [2024-07-15 14:00:03.343667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.343679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.343990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.344001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.344418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.344430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.344852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.344863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.345239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.345250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.345581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.345591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.345985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.345995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.346394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.346406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.346637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.346650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.346859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.346870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 
00:29:37.002 [2024-07-15 14:00:03.347156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.347167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.347557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.347567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.347979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.347990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.348308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.348319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.348628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.348639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.349026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.349037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.349444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.349455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.349718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.349728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.350159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.350170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.350570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.350580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 
00:29:37.002 [2024-07-15 14:00:03.351046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.351056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.351447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.351458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.351871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.351882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.352234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.352245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.352410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.352421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.352791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.352803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.353063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.353074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.353495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.353507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.353820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.353831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.354234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.354246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 
00:29:37.002 [2024-07-15 14:00:03.354637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.354648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.355060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.355071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.355473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.355484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.355930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.355941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.356246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.356258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.356654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.356665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.357060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.357071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.357530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.357542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.357962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.357972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.358512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.358550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 
00:29:37.002 [2024-07-15 14:00:03.358956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.358970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.359506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.359544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.359968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.359981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.360127] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:29:37.002 [2024-07-15 14:00:03.360172] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.002 [2024-07-15 14:00:03.360481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.360518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.360732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.360743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.361219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.361231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.361371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.361381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.361645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.361655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.362024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.362035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 
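The "Starting SPDK v24.09-pre ... DPDK 24.03.0 initialization" record above shows the EAL arguments the target was started with, including the core mask -c 0xF0 that mirrors the -m 0xF0 passed to nvmf_tgt earlier in this log. Such a mask is simply a bitmap of CPU indices, so 0xF0 (bits 4 through 7 set) pins the target to cores 4-7. A small stand-alone C sketch (illustration only, not SPDK or DPDK code) that decodes a mask of this form:

/* coremask_demo.c - hypothetical illustration only.
 * Expand a DPDK/SPDK-style hexadecimal core mask into the CPU indices it
 * selects; 0xF0 mirrors the -c/-m value in the log and expands to 4 5 6 7. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xF0ULL;           /* value taken from the log */
    printf("core mask 0x%llX selects cores:", mask);
    for (int cpu = 0; cpu < 64; cpu++)
        if (mask & (1ULL << cpu))
            printf(" %d", cpu);
    printf("\n");
    return 0;
}

Run as-is it prints "core mask 0xF0 selects cores: 4 5 6 7", i.e. the four cores the nvmf target process is restricted to in this run.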
00:29:37.002 [2024-07-15 14:00:03.362508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.002 [2024-07-15 14:00:03.362519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.002 qpair failed and we were unable to recover it. 00:29:37.002 [2024-07-15 14:00:03.362887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.362898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.363133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.363144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.363549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.363560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.364052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.364062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.364461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.364473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.364673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.364688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.365003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.365014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.365322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.365333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.365769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.365780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 
00:29:37.003 [2024-07-15 14:00:03.366197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.366209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.366651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.366662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.367072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.367083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.367545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.367557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.367944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.367955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.368357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.368368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.368660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.368671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.369041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.369052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.369464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.369476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.369732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.369743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 
00:29:37.003 [2024-07-15 14:00:03.370141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.370152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.370390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.370402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.370684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.370695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.370968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.370979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.371241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.371253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.371666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.371677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.372076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.372089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.372513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.372525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.372782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.372793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.373101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.373112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 
00:29:37.003 [2024-07-15 14:00:03.373539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.373551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.373940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.373951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.374349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.374360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.374777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.374787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.375227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.375238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.375528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.375538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.375949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.375960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.376386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.376397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.376700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.376710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.377129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.377140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 
00:29:37.003 [2024-07-15 14:00:03.377510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.377521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.377945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.377956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.378445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.378483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.378890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.378903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.379419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.379457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.379766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.379779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.380176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.380188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.380584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.380595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.380992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.381002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.381422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.381433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 
00:29:37.003 [2024-07-15 14:00:03.381832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.381843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.382079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.382089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.382495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.382506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.382922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.382937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.383327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.383338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.383730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.383740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.384135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.384146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.384553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.384563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.384954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.384965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.385281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.385291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 
00:29:37.003 [2024-07-15 14:00:03.385585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.385596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.386000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.386011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.386424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.386436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.386663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.386673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.387071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.387082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.387480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.387490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.387914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.387925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.388364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.388376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.388783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.388795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 00:29:37.003 [2024-07-15 14:00:03.389211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.003 [2024-07-15 14:00:03.389223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.003 qpair failed and we were unable to recover it. 
00:29:37.003 [2024-07-15 14:00:03.389640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.003 [2024-07-15 14:00:03.389651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.003 qpair failed and we were unable to recover it.
00:29:37.003 [2024-07-15 14:00:03.390027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.003 [2024-07-15 14:00:03.390038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.003 qpair failed and we were unable to recover it.
00:29:37.003 [2024-07-15 14:00:03.390426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.003 [2024-07-15 14:00:03.390437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.003 qpair failed and we were unable to recover it.
00:29:37.003 [2024-07-15 14:00:03.390861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.003 [2024-07-15 14:00:03.390872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.003 qpair failed and we were unable to recover it.
00:29:37.003 [2024-07-15 14:00:03.391284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.003 [2024-07-15 14:00:03.391295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.003 qpair failed and we were unable to recover it.
00:29:37.003 [2024-07-15 14:00:03.391597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.003 [2024-07-15 14:00:03.391608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.003 qpair failed and we were unable to recover it.
00:29:37.003 EAL: No free 2048 kB hugepages reported on node 1
00:29:37.003 [2024-07-15 14:00:03.392023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.003 [2024-07-15 14:00:03.392034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.003 qpair failed and we were unable to recover it.
00:29:37.003 [2024-07-15 14:00:03.392442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.003 [2024-07-15 14:00:03.392454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.004 qpair failed and we were unable to recover it.
00:29:37.004 [2024-07-15 14:00:03.392712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.004 [2024-07-15 14:00:03.392724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.004 qpair failed and we were unable to recover it.
00:29:37.004 [2024-07-15 14:00:03.393100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.004 [2024-07-15 14:00:03.393111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.004 qpair failed and we were unable to recover it.
00:29:37.004 [2024-07-15 14:00:03.393334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.393352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.393609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.393621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.393915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.393925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.394337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.394349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.394761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.394772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.395184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.395196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.395611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.395622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.396018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.396028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.396292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.396303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.396719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.396730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 
00:29:37.004 [2024-07-15 14:00:03.396987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.396997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.397355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.397365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.397760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.397771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.398074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.398085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.398491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.398502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.398898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.398909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.399168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.399179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.399378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.399388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.399794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.399805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.400226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.400237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 
00:29:37.004 [2024-07-15 14:00:03.400634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.400645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.401066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.401076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.401471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.401483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.401784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.401794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.402192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.402203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.402619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.402629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.403023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.403034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.403511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.403523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.403921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.403931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.404351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.404362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 
00:29:37.004 [2024-07-15 14:00:03.404747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.404757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.404934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.404946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.405311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.405322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.405731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.405742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.406010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.406020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.406296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.406307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.406578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.406588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.406813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.406824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.407063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.407073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.407538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.407549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 
00:29:37.004 [2024-07-15 14:00:03.407870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.407880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.408301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.408315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.408522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.408532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.408942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.408953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.409213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.409224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.409558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.409568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.410014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.410024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.410449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.410459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.410764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.410775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.411110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.411120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 
00:29:37.004 [2024-07-15 14:00:03.411541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.411551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.411949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.411959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.412355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.412366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.412786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.412796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.413187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.413198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.413622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.413632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.413911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.413923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.414225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.414236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.414720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.414730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.414990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.415001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 
00:29:37.004 [2024-07-15 14:00:03.415411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.415424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.415703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.415714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.415920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.415931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.416333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.416344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.416734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.416744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.417087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.417098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.417410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.417421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.417815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.417825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.418224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.418235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.418622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.418633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 
00:29:37.004 [2024-07-15 14:00:03.418825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.418836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.419196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.419207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.419484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.004 [2024-07-15 14:00:03.419495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.004 qpair failed and we were unable to recover it. 00:29:37.004 [2024-07-15 14:00:03.419887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.419897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.420317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.420328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.420719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.420730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.421136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.421147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.421422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.421434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.421839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.421850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.422238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.422249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 
00:29:37.005 [2024-07-15 14:00:03.422632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.422643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.422901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.422911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.423321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.423332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.423724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.423734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.424128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.424139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.424522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.424533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.424933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.424944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.425458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.425496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.425924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.425936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.426317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.426355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 
00:29:37.005 [2024-07-15 14:00:03.426760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.426773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.427186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.427197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.427634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.427645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.427868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.427883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.428197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.428208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.428489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.428500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.428920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.428931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.429205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.429217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.429606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.429617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.429965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.429976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 
00:29:37.005 [2024-07-15 14:00:03.430371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.430381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.430893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.430903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.431285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.431295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.431685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.431696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.432086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.432097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.432498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.432509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.432899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.432910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.433130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.433144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.433508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.433519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.433929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.433943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 
00:29:37.005 [2024-07-15 14:00:03.434426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.434464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.434865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.434878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.435368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.435406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.435653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.435666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.435980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.435991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.436381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.436392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.436792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.436802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.437311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.437349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.437741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.437754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.438163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.438175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 
00:29:37.005 [2024-07-15 14:00:03.438367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.438378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.438760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.438771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.439167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.439178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.439597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.439608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.440002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.440012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.440419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.440430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.440683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.440696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.441106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.441117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.441536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.441547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.441971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.441982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 
00:29:37.005 [2024-07-15 14:00:03.442390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.442401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.442798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.442809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.443202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.443213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.443615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.443626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.443972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.443982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.444400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.444411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.444809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.444820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.445063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:37.005 [2024-07-15 14:00:03.445328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.445367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.445649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.445662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.446067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.446077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 
00:29:37.005 [2024-07-15 14:00:03.446359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.446371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.446794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.446804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.447158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.447169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.447541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.447551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.447948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.447960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.448226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.448237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.448481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.448491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.448723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.448738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.449154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.449165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.449575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.449586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 
00:29:37.005 [2024-07-15 14:00:03.449991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.450003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.450406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.450418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.450819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.450829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.451098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.451109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.451509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.451521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.451926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.451938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.452207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.452220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.452643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.452653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.005 [2024-07-15 14:00:03.452904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.005 [2024-07-15 14:00:03.452915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.005 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.453226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.453237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 
00:29:37.006 [2024-07-15 14:00:03.453555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.453565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.453980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.453990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.454402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.454414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.454704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.454718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.455119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.455138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.455403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.455414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.455711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.455721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.456126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.456137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.456535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.456546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.456951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.456962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 
00:29:37.006 [2024-07-15 14:00:03.457469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.457508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.457722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.457735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.458166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.458178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.458598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.458609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.459006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.459017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.459319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.459330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.459621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.459632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.460050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.460061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.460365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.460376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.460658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.460671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 
00:29:37.006 [2024-07-15 14:00:03.461073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.461084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.461482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.461493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.461888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.461899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.462326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.462337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.462720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.462731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.463224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.463237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.463553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.463564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.463721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.463734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.464142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.464152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.464558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.464569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 
00:29:37.006 [2024-07-15 14:00:03.464793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.464804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.465296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.465307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.465691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.465701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.466071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.466082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.466409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.466419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.466828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.466838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.467238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.467248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.467663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.467674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.468068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.468079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.468384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.468395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 
00:29:37.006 [2024-07-15 14:00:03.468596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.468607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.468964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.468974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.469360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.469371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.469638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.469648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.470030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.470042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.470311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.470323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.470713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.470724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.471116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.471140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.471534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.471545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.471852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.471863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 
00:29:37.006 [2024-07-15 14:00:03.472140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.472151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.472478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.472488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.472891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.472901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.473313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.473324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.473579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.473591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.473972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.473982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.474282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.474293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.474703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.474714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.475102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.475114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.475509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.475520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 
00:29:37.006 [2024-07-15 14:00:03.475910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.475921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.476445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.476484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.476929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.476942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.477435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.477473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.477806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.477820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.478233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.478246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.478578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.478589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.478861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.478871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.479261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.479272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.479641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.479652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 
00:29:37.006 [2024-07-15 14:00:03.480032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.480043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.480431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.480442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.480830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.480840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.481167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.481179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.481562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.481572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.481829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.481839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.482229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.006 [2024-07-15 14:00:03.482240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.006 qpair failed and we were unable to recover it. 00:29:37.006 [2024-07-15 14:00:03.482529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.482540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.482945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.482955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.483351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.483362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 
00:29:37.007 [2024-07-15 14:00:03.483756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.483766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.484201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.484212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.484611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.484622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.485015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.485025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.485426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.485437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.485883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.485894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.486256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.486267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.486529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.486540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.486929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.486940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.487374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.487385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 
00:29:37.007 [2024-07-15 14:00:03.487773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.487783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.488179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.488190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.488474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.488484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.488887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.488898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.489312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.489322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.489744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.489754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.490144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.490155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.490521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.490533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.490845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.490855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.491261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.491272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 
00:29:37.007 [2024-07-15 14:00:03.491667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.491677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.492116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.492132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.492505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.492516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.492909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.492919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.493199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.493210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.493612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.493623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.494006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.494016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.494430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.494441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.494868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.494879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.495291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.495302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 
00:29:37.007 [2024-07-15 14:00:03.495691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.495702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.496094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.496105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.496416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.496432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.496644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.496655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.497024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.497035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.497450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.497461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.497843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.497853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.498259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.498270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.498659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.498670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.499059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.499071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 
00:29:37.007 [2024-07-15 14:00:03.499465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.499475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.499889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.499899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.500311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.500322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.500782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.500793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.501189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.501199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.501616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.501626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.502013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.502024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.502428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.502440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.502829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.502839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.503250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.503261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 
00:29:37.007 [2024-07-15 14:00:03.503577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.503587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.503982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.503992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.504249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.504261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.504631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.504642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.505026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.505037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.505235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.505249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.505641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.505652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.506061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.506072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.506460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.506471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 00:29:37.007 [2024-07-15 14:00:03.506932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.007 [2024-07-15 14:00:03.506942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.007 qpair failed and we were unable to recover it. 
00:29:37.007 [2024-07-15 14:00:03.507336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.007 [2024-07-15 14:00:03.507347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.007 qpair failed and we were unable to recover it.
00:29:37.007 [2024-07-15 14:00:03.507758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.007 [2024-07-15 14:00:03.507768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.007 qpair failed and we were unable to recover it.
00:29:37.007 [2024-07-15 14:00:03.508153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.007 [2024-07-15 14:00:03.508164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.007 qpair failed and we were unable to recover it.
00:29:37.007 [2024-07-15 14:00:03.508625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.007 [2024-07-15 14:00:03.508636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.007 qpair failed and we were unable to recover it.
00:29:37.007 [2024-07-15 14:00:03.509028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.007 [2024-07-15 14:00:03.509039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.007 qpair failed and we were unable to recover it.
00:29:37.007 [2024-07-15 14:00:03.509436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.007 [2024-07-15 14:00:03.509447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.007 qpair failed and we were unable to recover it.
00:29:37.007 [2024-07-15 14:00:03.509835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.007 [2024-07-15 14:00:03.509845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.007 qpair failed and we were unable to recover it.
00:29:37.007 [2024-07-15 14:00:03.510182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:37.007 [2024-07-15 14:00:03.510210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:37.007 [2024-07-15 14:00:03.510218] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:37.007 [2024-07-15 14:00:03.510224] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:37.007 [2024-07-15 14:00:03.510229] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:37.007 [2024-07-15 14:00:03.510241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.007 [2024-07-15 14:00:03.510252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.007 qpair failed and we were unable to recover it.
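Buried in the retry noise above, app_setup_trace reports that tracing is enabled with group mask 0xFFFF and names two ways to get at the data: run 'spdk_trace -s nvmf -i 0' (or plain 'spdk_trace' if this is the only SPDK application running) while the target is up, or copy /dev/shm/nvmf_trace.0 for offline analysis. As a small illustrative sketch only, not part of the test itself, the copy option can be done from C roughly like this (the source path comes from the notice; the destination filename is an arbitrary choice for the sketch):

    /* Illustrative sketch: preserve the trace shared-memory file named by
     * the NOTICE above so it can be analyzed offline later. */
    #include <stdio.h>

    int main(void)
    {
        FILE *in = fopen("/dev/shm/nvmf_trace.0", "rb");
        FILE *out = fopen("nvmf_trace.0.saved", "wb");
        char buf[64 * 1024];
        size_t n;

        if (in == NULL || out == NULL) {
            perror("fopen");
            return 1;
        }
        while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
            fwrite(buf, 1, n, out);

        fclose(out);
        fclose(in);
        return 0;
    }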
00:29:37.007 [2024-07-15 14:00:03.510383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:37.007 [2024-07-15 14:00:03.510536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:37.007 [2024-07-15 14:00:03.510669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.007 [2024-07-15 14:00:03.510662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:37.007 [2024-07-15 14:00:03.510679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.007 qpair failed and we were unable to recover it.
00:29:37.007 [2024-07-15 14:00:03.510663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:37.007 [2024-07-15 14:00:03.511047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.008 [2024-07-15 14:00:03.511058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.008 qpair failed and we were unable to recover it.
00:29:37.008 [2024-07-15 14:00:03.511423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.008 [2024-07-15 14:00:03.511434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.008 qpair failed and we were unable to recover it.
00:29:37.008 [2024-07-15 14:00:03.511684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.008 [2024-07-15 14:00:03.511695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.008 qpair failed and we were unable to recover it.
00:29:37.008 [2024-07-15 14:00:03.512086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.008 [2024-07-15 14:00:03.512097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.008 qpair failed and we were unable to recover it.
00:29:37.008 [2024-07-15 14:00:03.512497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.008 [2024-07-15 14:00:03.512509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.008 qpair failed and we were unable to recover it.
00:29:37.283 [2024-07-15 14:00:03.512908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.283 [2024-07-15 14:00:03.512920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.283 qpair failed and we were unable to recover it.
00:29:37.283 [2024-07-15 14:00:03.513405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.284 [2024-07-15 14:00:03.513417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.284 qpair failed and we were unable to recover it.
00:29:37.284 [2024-07-15 14:00:03.513802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.284 [2024-07-15 14:00:03.513814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420
00:29:37.284 qpair failed and we were unable to recover it.
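The reactor_run notices interleaved above show the target's per-core event loops (reactors) coming up on cores 4 through 7, which matches a poll-mode design where one thread is pinned to each core it owns. As a generic illustration of that pinning mechanism only, and not SPDK's actual reactor code, a Linux thread can be restricted to a single core like this (core 4 is chosen simply because it appears in the log):

    /* Generic illustration: pin the calling thread to one CPU core, the
     * same OS mechanism a per-core polling event loop relies on. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(4, &set);       /* allow this thread to run on core 4 only */

        /* pid 0 means "the calling thread" */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to core 4; an event loop would poll here\n");
        return 0;
    }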
00:29:37.284 [2024-07-15 14:00:03.514032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.514042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.514445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.514457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.514849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.514862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.515204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.515215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.515629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.515640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.516033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.516044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.516447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.516458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.516861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.516872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.517286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.517297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.517588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.517599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 
00:29:37.284 [2024-07-15 14:00:03.517871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.517881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.518272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.518283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.518699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.518710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.519097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.519108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.519399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.519411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.519806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.519817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.520234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.520246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.520661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.520671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.521065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.521076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.521545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.521556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 
00:29:37.284 [2024-07-15 14:00:03.521726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.521737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.522113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.522130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.522401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.522412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.522671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.522682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.523096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.523107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.523557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.523570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.523957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.523968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.524452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.524492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.524633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.524649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.524938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.524949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 
00:29:37.284 [2024-07-15 14:00:03.525240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.525252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.525666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.525677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.526130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.526141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.526504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.526515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.526913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.526924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.527145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.527158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.527433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.527444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.527836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.527847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.528240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.528251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.528731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.528742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 
00:29:37.284 [2024-07-15 14:00:03.529134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.529144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.529540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.529551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.529836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.529847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.530269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.530280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.530496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.530509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.284 qpair failed and we were unable to recover it. 00:29:37.284 [2024-07-15 14:00:03.530903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.284 [2024-07-15 14:00:03.530913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.531296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.531307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.531571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.531585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.532002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.532013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.532331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.532342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 
00:29:37.285 [2024-07-15 14:00:03.532610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.532621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.533013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.533025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.533428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.533439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.533655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.533667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.533945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.533955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.534349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.534360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.534781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.534792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.535188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.535198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.535655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.535666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.535910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.535920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 
00:29:37.285 [2024-07-15 14:00:03.536252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.536263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.536660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.536670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.536918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.536928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.537329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.537341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.537762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.537773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.538089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.538100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.538297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.538308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.538712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.538722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.539019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.539030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.539322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.539332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 
00:29:37.285 [2024-07-15 14:00:03.539735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.539745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.540140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.540152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.540553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.540563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.540953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.540964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.541346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.541358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.541795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.541806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.542189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.542201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.542457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.542469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.542884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.542896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.543126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.543137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 
00:29:37.285 [2024-07-15 14:00:03.543530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.543541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.543805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.543816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.544215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.544227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.544642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.544653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.544906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.544917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.545153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.545163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.545536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.545547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.545792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.545802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.546069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.546080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.546475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.546486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 
00:29:37.285 [2024-07-15 14:00:03.546880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.546892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.547291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.547304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.547694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.547705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.548094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.548105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.548534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.548545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.285 qpair failed and we were unable to recover it. 00:29:37.285 [2024-07-15 14:00:03.548956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.285 [2024-07-15 14:00:03.548968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.549382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.549395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.549656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.549667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.549997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.550009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.550375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.550386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 
00:29:37.286 [2024-07-15 14:00:03.550808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.550819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.551215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.551227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.551448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.551458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.551850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.551861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.552279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.552291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.552607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.552618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.552880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.552891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.553282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.553294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.553554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.553564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.553952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.553963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 
00:29:37.286 [2024-07-15 14:00:03.554365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.554377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.554644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.554655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.554861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.554873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.555226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.555238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.555647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.555658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.556056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.556069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.556463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.556473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.556866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.556878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.557157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.557168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.557384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.557396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 
00:29:37.286 [2024-07-15 14:00:03.557789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.557799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.558057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.558068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.558468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.558478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.558955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.558965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.559386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.559397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.559667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.559678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.560111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.560126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.560428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.560438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.560807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.560818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.561288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.561298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 
00:29:37.286 [2024-07-15 14:00:03.561512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.561522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.561883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.561893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.562308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.562319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.562572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.562583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.562976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.562987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.563414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.563425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.563771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.563781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.564184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.564195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.564617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.564628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.564924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.564935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 
00:29:37.286 [2024-07-15 14:00:03.565358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.565369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.565757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.565767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.566161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.566173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.566587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.566597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.566820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.566830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.567227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.286 [2024-07-15 14:00:03.567251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.286 qpair failed and we were unable to recover it. 00:29:37.286 [2024-07-15 14:00:03.567439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.567451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.567845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.567855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.568270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.568281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.568749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.568759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 
00:29:37.287 [2024-07-15 14:00:03.569142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.569153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.569587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.569597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.569909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.569920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.570154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.570164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.570532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.570542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.570938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.570948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.571344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.571359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.571754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.571765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.572159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.572170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.572390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.572403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 
00:29:37.287 [2024-07-15 14:00:03.572716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.572727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.573140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.573151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.573569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.573580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.573975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.573985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.574381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.574392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.574781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.574792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.575116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.575130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.575547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.575558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.575984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.575995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.576480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.576521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 
00:29:37.287 [2024-07-15 14:00:03.576939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.576952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.577436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.577474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.577892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.577904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.578416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.578455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.578856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.578869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.579131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.579143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.579518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.579529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.579922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.579933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.580417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.580455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.580855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.580868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 
00:29:37.287 [2024-07-15 14:00:03.581408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.581446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.581749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.581761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.582158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.582170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.582563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.582578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.582867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.582879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.583272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.583283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.583546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.583556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.583946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.583956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.584345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.584355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 00:29:37.287 [2024-07-15 14:00:03.584741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.584751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.287 qpair failed and we were unable to recover it. 
00:29:37.287 [2024-07-15 14:00:03.584964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.287 [2024-07-15 14:00:03.584974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.585280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.585291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.585680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.585690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.586096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.586107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.586510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.586521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.586746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.586756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.587042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.587052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.587329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.587341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.587601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.587611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.587997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.588009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 
00:29:37.288 [2024-07-15 14:00:03.588389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.588400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.588685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.588695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.589093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.589103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.589574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.589585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.589845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.589856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.590239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.590249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.590457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.590472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.590878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.590889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.591310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.591321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.591722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.591733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 
00:29:37.288 [2024-07-15 14:00:03.592116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.592131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.592542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.592553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.592928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.592939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.593194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.593205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.593619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.593629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.593885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.593896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.594297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.594308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.594699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.594709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.595102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.595113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.595314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.595325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 
00:29:37.288 [2024-07-15 14:00:03.595720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.595731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.596142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.596153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.596548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.596558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.596952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.596962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.597360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.597373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.597764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.597774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.598170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.598181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.598467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.598477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.598889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.598900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.599115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.599130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 
00:29:37.288 [2024-07-15 14:00:03.599596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.599607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.599998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.600008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.600400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.600411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.600810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.600820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.601220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.601231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.601523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.601533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.601955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.601965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.602180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.602191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.602597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.602608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.603004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.603014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 
00:29:37.288 [2024-07-15 14:00:03.603290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.603302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.603699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.288 [2024-07-15 14:00:03.603710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.288 qpair failed and we were unable to recover it. 00:29:37.288 [2024-07-15 14:00:03.604101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.604111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.604559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.604569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.604953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.604963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.605344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.605355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.605737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.605748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.606146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.606158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.606579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.606590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.606886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.606896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 
00:29:37.289 [2024-07-15 14:00:03.607171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.607181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.607533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.607546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.607960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.607970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.608448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.608459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.608684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.608694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.609058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.609069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.609484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.609495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.609822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.609832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.610205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.610216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.610471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.610481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 
00:29:37.289 [2024-07-15 14:00:03.610861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.610872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.611304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.611314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.611546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.611556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.611952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.611964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.612266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.612276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.612555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.612566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.613002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.613012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.613428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.613438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.613665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.613675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.613860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.613872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 
00:29:37.289 [2024-07-15 14:00:03.614345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.614356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.614749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.614760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.614989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.614999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.615395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.615405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.615700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.615710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.616003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.616014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.616419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.616430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.616820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.616830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.617242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.617253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.617653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.617663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 
00:29:37.289 [2024-07-15 14:00:03.618082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.618093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.618475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.618486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.618883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.618893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.619107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.619118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.619510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.619521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.619871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.619881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.620356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.620394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.620796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.620809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.621226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.621238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.621598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.621608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 
00:29:37.289 [2024-07-15 14:00:03.622012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.622023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.622423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.622435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.622691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.622707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.623101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.623111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.623589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.623603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.623989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.624000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.289 qpair failed and we were unable to recover it. 00:29:37.289 [2024-07-15 14:00:03.624480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.289 [2024-07-15 14:00:03.624519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.624725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.624737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.624946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.624956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.625315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.625326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 
00:29:37.290 [2024-07-15 14:00:03.625744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.625755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.626147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.626159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.626541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.626551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.626994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.627004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.627399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.627410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.627804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.627814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.628211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.628222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.628622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.628633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.629041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.629053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.629454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.629465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 
00:29:37.290 [2024-07-15 14:00:03.629854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.629864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.630086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.630100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.630488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.630499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.630894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.630905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.631131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.631143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.631441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.631452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.631872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.631882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.632312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.632322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.632595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.632605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.632994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.633007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 
00:29:37.290 [2024-07-15 14:00:03.633308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.633320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.633401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.633413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.633558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.633568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.633862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.633872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.634136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.634147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.634549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.634560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.634652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.634661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.634989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.634999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.635396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.635407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.635822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.635832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 
00:29:37.290 [2024-07-15 14:00:03.636114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.636129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.636514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.636524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.636913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.636924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.637351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.637363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.637788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.637800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.638063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.638074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.638467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.638477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.638847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.638857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.639247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.639258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.639520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.639531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 
00:29:37.290 [2024-07-15 14:00:03.639922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.639932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.640214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.640225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.640627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.640637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.640904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.640916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.641307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.641318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.641737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.290 [2024-07-15 14:00:03.641747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.290 qpair failed and we were unable to recover it. 00:29:37.290 [2024-07-15 14:00:03.642120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.642135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.642509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.642519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.642911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.642922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.643452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.643490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 
00:29:37.291 [2024-07-15 14:00:03.643893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.643907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.644287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.644326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.644662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.644675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.645047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.645057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.645313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.645325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.645801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.645812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.646207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.646218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.646306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.646315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Write completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Write completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Write completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Write completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Write completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Write completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Write completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Write completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 Read completed with error (sct=0, sc=8)
00:29:37.291 starting I/O failed
00:29:37.291 [2024-07-15 14:00:03.646533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:37.291 [2024-07-15 14:00:03.646791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.291 [2024-07-15 14:00:03.646803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:29:37.291 qpair failed and we were unable to recover it.
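Note on the error numbers in the block above: this run is on Linux, where errno 111 is ECONNREFUSED (nothing on 10.0.0.2 is accepting the NVMe/TCP connection on port 4420 at that moment) and the "CQ transport error -6" reported by spdk_nvme_qpair_process_completions is -ENXIO, "No such device or address"; the queued reads and writes on qpair id 2 are then completed with an error status and the qpair is given up on. The short sketch below is illustrative only, not part of the test harness (the file name is hypothetical); it simply prints the Linux strings for those two error numbers.

/* decode_errnos.c (hypothetical file name) - illustrative only, not part of
 * this test. Prints the Linux error strings for the two numbers seen in the
 * log: 111 from the failed connect() calls and 6, the magnitude of the
 * "CQ transport error -6" reported by spdk_nvme_qpair_process_completions.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    int codes[] = { 111, 6 };   /* ECONNREFUSED and ENXIO on Linux */

    for (size_t i = 0; i < sizeof(codes) / sizeof(codes[0]); i++) {
        printf("errno %d: %s\n", codes[i], strerror(codes[i]));
    }
    /* Expected output on Linux:
     *   errno 111: Connection refused
     *   errno 6: No such device or address
     */
    return 0;
}

Built with a plain cc decode_errnos.c, it prints "Connection refused" and "No such device or address", the same strings the messages in this log refer to.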
00:29:37.291 [2024-07-15 14:00:03.647064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.647072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.647555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.647584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.647816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.647826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.648023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.648031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.648289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.648300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.648699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.648709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.649334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.649363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.649763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.649772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.649981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.649989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.650471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.650500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 
00:29:37.291 [2024-07-15 14:00:03.650585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.650593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.650906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.650914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.651332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.651340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.651550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.651557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.651912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.651920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.652323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.652331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.652646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.652653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.653061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.653069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.653471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.653480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.653870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.653877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 
00:29:37.291 [2024-07-15 14:00:03.654294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.654302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.654556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.654567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.654961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.654969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.655221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.655229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.655401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.655410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.655811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.655818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.656022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.656030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.656384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.656392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.656846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.656854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 00:29:37.291 [2024-07-15 14:00:03.657051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.291 [2024-07-15 14:00:03.657059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.291 qpair failed and we were unable to recover it. 
00:29:37.291 [2024-07-15 14:00:03.657326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.657334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.657663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.657671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.657885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.657892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.658278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.658285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.658685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.658693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.659086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.659094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.659515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.659523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.659744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.659752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.660158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.660166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.660478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.660486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 
00:29:37.292 [2024-07-15 14:00:03.660709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.660717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.661021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.661028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.661437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.661446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.661835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.661842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.662120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.662131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.662412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.662420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.662840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.662848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.663061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.663071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.663299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.663308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.663708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.663716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 
00:29:37.292 [2024-07-15 14:00:03.664106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.664114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.664502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.664511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.664803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.664810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.665116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.665125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.665502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.665509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.665732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.665740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.666178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.666186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.666420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.666429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.666820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.666828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.667217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.667225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 
00:29:37.292 [2024-07-15 14:00:03.667628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.667635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.667854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.667864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.668262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.668270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.668480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.668488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.668849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.668857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.669246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.669253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.669676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.669683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.670074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.670081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.670337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.670345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.670542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.670551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 
00:29:37.292 [2024-07-15 14:00:03.670706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.670714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.670919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.670927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.671344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.671352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.671606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.671614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.671867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.671875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.672264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.672272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.672687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.672695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.673168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.673176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.673484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.673491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.673884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.673891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 
00:29:37.292 [2024-07-15 14:00:03.674257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.674265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.674643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.674651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.675043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.675051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.675440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.675449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.675513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.675522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.675887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.675894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.676290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.676298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.676700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.292 [2024-07-15 14:00:03.676708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.292 qpair failed and we were unable to recover it. 00:29:37.292 [2024-07-15 14:00:03.676965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.293 [2024-07-15 14:00:03.676973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.293 qpair failed and we were unable to recover it. 00:29:37.293 [2024-07-15 14:00:03.677367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.293 [2024-07-15 14:00:03.677375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.293 qpair failed and we were unable to recover it. 
00:29:37.293 [2024-07-15 14:00:03.677769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.293 [2024-07-15 14:00:03.677777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.293 qpair failed and we were unable to recover it. 00:29:37.293 [2024-07-15 14:00:03.678171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.293 [2024-07-15 14:00:03.678179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.293 qpair failed and we were unable to recover it. 00:29:37.293 [2024-07-15 14:00:03.678567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.293 [2024-07-15 14:00:03.678574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.293 qpair failed and we were unable to recover it. 00:29:37.293 [2024-07-15 14:00:03.678985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.293 [2024-07-15 14:00:03.678993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.293 qpair failed and we were unable to recover it. 00:29:37.293 [2024-07-15 14:00:03.679380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.293 [2024-07-15 14:00:03.679389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.293 qpair failed and we were unable to recover it. 00:29:37.293 [2024-07-15 14:00:03.679784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.293 [2024-07-15 14:00:03.679791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.293 qpair failed and we were unable to recover it. 00:29:37.293 [2024-07-15 14:00:03.680180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.293 [2024-07-15 14:00:03.680188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.293 qpair failed and we were unable to recover it. 00:29:37.293 [2024-07-15 14:00:03.680451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.293 [2024-07-15 14:00:03.680459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.293 qpair failed and we were unable to recover it. 00:29:37.293 [2024-07-15 14:00:03.680846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.293 [2024-07-15 14:00:03.680853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.293 qpair failed and we were unable to recover it. 00:29:37.293 [2024-07-15 14:00:03.681025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.293 [2024-07-15 14:00:03.681033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.293 qpair failed and we were unable to recover it. 
00:29:37.293 [2024-07-15 14:00:03.681257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.293 [2024-07-15 14:00:03.681265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420
00:29:37.293 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent reconnection attempt, with only the per-attempt timestamps changing, through 2024-07-15 14:00:03.755419 ...]
00:29:37.297 [2024-07-15 14:00:03.755834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.755842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.756229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.756237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.756617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.756626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.757021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.757029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.757094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.757100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.757357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.757366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.757613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.757621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.758016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.758023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.758427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.758434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.758860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.758868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 
00:29:37.297 [2024-07-15 14:00:03.759125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.759133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.759509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.759516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.759906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.759913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.760273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.760281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.760483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.760491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.760892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.760900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.761296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.761304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.761720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.761727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.762120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.762130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.762563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.762570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 
00:29:37.297 [2024-07-15 14:00:03.762814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.762822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.763098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.763105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.763361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.763368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.763749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.763757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.764150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.764158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.764614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.764622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.765007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.765014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.765421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.765429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.765825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.765833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.766035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.766044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 
00:29:37.297 [2024-07-15 14:00:03.766282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.766291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.766508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.766519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.766866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.766874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.767295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.767303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.767558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.767566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.768041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.768048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.768445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.768453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.768869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.768877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.769103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.769111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.769503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.769511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 
00:29:37.297 [2024-07-15 14:00:03.769901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.769909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.770308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.770316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.770702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.770710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.770964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.770972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.771131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.771140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.771521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.771529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.771918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.771925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.772253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.772261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.772662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.772669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.773049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.773058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 
00:29:37.297 [2024-07-15 14:00:03.773312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.773321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.773713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.773721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.773993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.774001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.774204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.774214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.774613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.774621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.774843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.774850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.775054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.775061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.775472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.775480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.775777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.775785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.297 [2024-07-15 14:00:03.776016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.776024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 
00:29:37.297 [2024-07-15 14:00:03.776490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.297 [2024-07-15 14:00:03.776498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.297 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.776627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.776634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.777032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.777040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.777429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.777437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.777824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.777832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.778262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.778270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.778648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.778656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.778862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.778870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.779286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.779294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.779635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.779643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 
00:29:37.298 [2024-07-15 14:00:03.780028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.780036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.780407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.780418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.780808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.780815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.781234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.781242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.781652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.781660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.781861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.781870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.782067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.782075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.782441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.782449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.782911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.782919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.783297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.783305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 
00:29:37.298 [2024-07-15 14:00:03.783561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.783569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.783775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.783783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.784196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.784204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.784607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.784615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.785028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.785035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.785425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.785433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.785611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.785620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.785985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.785993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.786373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.786380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.786762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.786769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 
00:29:37.298 [2024-07-15 14:00:03.787180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.787188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.787653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.787661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.788053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.788061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.788447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.788455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.788873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.788881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.789320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.789329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.789612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.789621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.790015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.790022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.790414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.790422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.790650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.790657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 
00:29:37.298 [2024-07-15 14:00:03.790852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.790860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.791226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.791234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.791634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.791642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.792029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.792037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.792473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.792480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.792754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.792762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.793179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.793187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.793561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.793568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.793962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.793970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.794403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.794412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 
00:29:37.298 [2024-07-15 14:00:03.794791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.794800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.795055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.795064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.795451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.795459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.795678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.795686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.795883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.795893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.796257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.796264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.796534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.796542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.796935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.796943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.797357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.797365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.797753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.797761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 
00:29:37.298 [2024-07-15 14:00:03.798153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.798161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.298 [2024-07-15 14:00:03.798425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.298 [2024-07-15 14:00:03.798433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.298 qpair failed and we were unable to recover it. 00:29:37.299 [2024-07-15 14:00:03.798848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.299 [2024-07-15 14:00:03.798856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.299 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.799275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.799285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.799730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.799738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.800044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.800052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.800446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.800455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.800843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.800850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.801054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.801061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.801245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.801252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 
00:29:37.570 [2024-07-15 14:00:03.801519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.801526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.801971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.801980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.802176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.802185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.802548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.802556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.802970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.802978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.803313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.803320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.803703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.803710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.804103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.804111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.804373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.804381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.804570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.804578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 
00:29:37.570 [2024-07-15 14:00:03.804976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.804984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.805403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.805411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.570 qpair failed and we were unable to recover it. 00:29:37.570 [2024-07-15 14:00:03.805822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.570 [2024-07-15 14:00:03.805830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.806218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.806226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.806629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.806637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.807016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.807023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.807327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.807335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.807755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.807762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.808153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.808161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.808584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.808592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 
00:29:37.571 [2024-07-15 14:00:03.808798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.808805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.809208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.809218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.809613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.809621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.810050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.810057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.810368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.810376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.810762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.810770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.811161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.811169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.811369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.811377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.811759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.811767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.812154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.812162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 
00:29:37.571 [2024-07-15 14:00:03.812554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.812562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.812780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.812787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.813075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.813082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.813488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.813496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.813887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.813895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.814286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.814294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.814514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.814522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.814906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.814915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.815135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.815144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.815435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.815442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 
00:29:37.571 [2024-07-15 14:00:03.815864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.815872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.816132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.816140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.816539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.816546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.816940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.816948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.817027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.817033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.817355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.817362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.817619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.817627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.818023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.818031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.818426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.818434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.818688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.818696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 
00:29:37.571 [2024-07-15 14:00:03.819083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.819090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.819483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.819493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.571 qpair failed and we were unable to recover it. 00:29:37.571 [2024-07-15 14:00:03.819922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.571 [2024-07-15 14:00:03.819930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.820341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.820349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.820738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.820746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.820808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.820815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.821142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.821150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.821535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.821543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.821925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.821933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.822327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.822336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 
00:29:37.572 [2024-07-15 14:00:03.822598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.822607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.822819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.822829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.823203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.823211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.823590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.823598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.823986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.823994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.824425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.824433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.824641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.824648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.825010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.825018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.825502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.825510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.825701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.825710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 
00:29:37.572 [2024-07-15 14:00:03.826101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.826108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.826490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.826499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.826758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.826766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.827165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.827173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.827576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.827584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.827974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.827982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.828244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.828251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.828655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.828663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.829074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.829081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.829495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.829503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 
00:29:37.572 [2024-07-15 14:00:03.829898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.829906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.830195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.830202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.830618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.830626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.830822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.830831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.831274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.831281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.831671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.831679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.832093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.832101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.832507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.832515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.832909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.832917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.833137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.833145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 
00:29:37.572 [2024-07-15 14:00:03.833537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.833545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.833803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.833811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.572 [2024-07-15 14:00:03.834206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.572 [2024-07-15 14:00:03.834214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.572 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.834471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.834479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.834893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.834901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.835106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.835115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.835419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.835427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.835821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.835829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.836205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.836213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.836456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.836464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 
00:29:37.573 [2024-07-15 14:00:03.836858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.836866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.837256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.837264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.837683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.837691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.838089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.838098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.838507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.838515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.838929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.838937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.839353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.839361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.839765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.839773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.839977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.839985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.840372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.840379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 
00:29:37.573 [2024-07-15 14:00:03.840445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.840452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.840833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.840841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.841261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.841269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.841654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.841661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.841868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.841876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.842081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.842088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.842455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.842464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.842852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.842860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.843244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.843251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.843646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.843654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 
00:29:37.573 [2024-07-15 14:00:03.844067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.844075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.844457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.844465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.844859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.844866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.845256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.845264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.845677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.845686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.846073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.846082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.846481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.846489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.846917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.846925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.847425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.847458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.847855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.847865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 
00:29:37.573 [2024-07-15 14:00:03.848262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.848271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.848458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.848467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.848835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.848843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.573 qpair failed and we were unable to recover it. 00:29:37.573 [2024-07-15 14:00:03.849067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.573 [2024-07-15 14:00:03.849076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.849488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.849496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.849900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.849908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.850133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.850142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.850548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.850556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.850846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.850854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.851180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.851188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 
00:29:37.574 [2024-07-15 14:00:03.851452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.851460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.851716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.851725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.852058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.852066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.852483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.852490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.852569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.852576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.852934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.852941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.853203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.853211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.853611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.853619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.853840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.853848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.854040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.854050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 
00:29:37.574 [2024-07-15 14:00:03.854458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.854466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.854719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.854726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.855119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.855132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.855599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.855607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.855999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.856007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.856417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.856425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.856822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.856830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.857253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.857262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.857657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.857665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.857952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.857960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 
00:29:37.574 [2024-07-15 14:00:03.858347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.858355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.574 [2024-07-15 14:00:03.858782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.574 [2024-07-15 14:00:03.858791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.574 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.859078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.859086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.859343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.859353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.859751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.859760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.860180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.860188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.860594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.860602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.860862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.860871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.861174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.861184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.861594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.861603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 
00:29:37.575 [2024-07-15 14:00:03.862018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.862027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.862281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.862290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.862684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.862693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.862822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.862830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.863104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.863113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.863317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.863325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.863692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.863700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.864118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.864131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.864519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.864526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.864797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.864806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 
00:29:37.575 [2024-07-15 14:00:03.865020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.865029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.865414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.865422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.865810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.865818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.866112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.866120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.866505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.866513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.866736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.866745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.867089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.867097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.867292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.867300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.867710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.575 [2024-07-15 14:00:03.867718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.575 qpair failed and we were unable to recover it. 00:29:37.575 [2024-07-15 14:00:03.868133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.868142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 
00:29:37.576 [2024-07-15 14:00:03.868544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.868552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.868948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.868956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.869288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.869296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.869706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.869714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.870103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.870112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.870379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.870387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.870803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.870812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.871225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.871233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.871523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.871531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.871767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.871774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 
00:29:37.576 [2024-07-15 14:00:03.872203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.872211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.872493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.872502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.872890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.872899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.873249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.873258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.873542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.873550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.873760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.873770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.874139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.874147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.874334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.874342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.874751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.874761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.875184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.875191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 
00:29:37.576 [2024-07-15 14:00:03.875480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.875488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.875912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.875920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.876178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.876186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.876580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.876587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.876975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.876983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.877184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.877193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.877605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.877612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.877866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.877874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.878174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.878182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.878592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.878599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 
00:29:37.576 [2024-07-15 14:00:03.878810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.878818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.879203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.879218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.879287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.879296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.879663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.879671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.880151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.880159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.880370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.880379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.880783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.880791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.881184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.881194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.881619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.576 [2024-07-15 14:00:03.881628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.576 qpair failed and we were unable to recover it. 00:29:37.576 [2024-07-15 14:00:03.882054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.882062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 
00:29:37.577 [2024-07-15 14:00:03.882508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.882516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.882762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.882771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.883186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.883194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.883591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.883599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.883997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.884005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.884260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.884268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.884663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.884671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.885072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.885079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.885466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.885474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.885676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.885686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 
00:29:37.577 [2024-07-15 14:00:03.885981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.885989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.886370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.886378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.886811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.886819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.887242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.887250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.887549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.887558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.887947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.887956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.888345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.888353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.888681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.888690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.889075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.889086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.889467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.889475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 
00:29:37.577 [2024-07-15 14:00:03.889866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.889873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.890095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.890103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.890285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.890294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.890564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.890572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.890968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.890976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.891362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.891370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.891756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.891764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.892157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.892166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.892386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.892394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.892796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.892804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 
00:29:37.577 [2024-07-15 14:00:03.893005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.893014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.893375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.893383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.893770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.893779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.894196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.894204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.894435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.894443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.894622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.894630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.895016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.895024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.895224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.895232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.895637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.895645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 00:29:37.577 [2024-07-15 14:00:03.896035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.577 [2024-07-15 14:00:03.896042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.577 qpair failed and we were unable to recover it. 
00:29:37.577 [2024-07-15 14:00:03.896437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.896445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.896860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.896868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.897315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.897323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.897715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.897723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.897926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.897935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.898098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.898105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.898468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.898475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.898785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.898793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.899186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.899194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.899373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.899381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 
00:29:37.578 [2024-07-15 14:00:03.899768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.899776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.899973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.899981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.900162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.900170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.900424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.900432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.900822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.900831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.901222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.901230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.901627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.901635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.902047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.902055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.902451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.902461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.902725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.902733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 
00:29:37.578 [2024-07-15 14:00:03.902914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.902922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.903109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.903116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.903504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.903512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.903907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.903914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.904276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.904284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.904692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.904700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.905089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.905097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.905362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.905370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.905648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.905656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.905879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.905888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 
00:29:37.578 [2024-07-15 14:00:03.906157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.906164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.906577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.906584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.906806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.906815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.907192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.907200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.907606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.907614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.908010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.908018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.908429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.908437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.908574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.908580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.908779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.908787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 00:29:37.578 [2024-07-15 14:00:03.909036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.578 [2024-07-15 14:00:03.909045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.578 qpair failed and we were unable to recover it. 
00:29:37.578 [2024-07-15 14:00:03.909490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.909498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.909718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.909726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.909912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.909920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.910107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.910116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.910373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.910381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.910795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.910803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.911191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.911199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.911607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.911615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.912003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.912011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.912376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.912383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 
00:29:37.579 [2024-07-15 14:00:03.912786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.912793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.913100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.913108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.913470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.913479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.913808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.913816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.914208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.914216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.914629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.914637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.915030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.915037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.915431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.915439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.915694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.915703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.915893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.915902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 
00:29:37.579 [2024-07-15 14:00:03.916298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.916306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.916738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.916746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.917141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.917149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.917522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.917529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.917927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.917935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.918351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.918359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.918569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.918577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.918934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.918941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.919347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.919354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.919421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.919427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 
00:29:37.579 [2024-07-15 14:00:03.919796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.919804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.920199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.920208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.920606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.920613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.579 [2024-07-15 14:00:03.920899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.579 [2024-07-15 14:00:03.920907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.579 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.921318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.921326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.921704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.921712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.921917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.921926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.922331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.922338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.922754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.922762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.923062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.923070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 
00:29:37.580 [2024-07-15 14:00:03.923460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.923468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.923866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.923873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.924288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.924296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.924685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.924693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.925107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.925116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.925481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.925489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.925903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.925912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.926425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.926454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.926855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.926864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.927359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.927388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 
00:29:37.580 [2024-07-15 14:00:03.927817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.927827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.928251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.928259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.928688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.928696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.929130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.929138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.929494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.929502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.929881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.929888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.930271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.930279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.930679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.930687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.931001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.931013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.931206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.931215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 
00:29:37.580 [2024-07-15 14:00:03.931628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.931636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.932074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.932082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.932500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.932508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.932888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.932895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.933117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.933131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.933478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.933486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.933786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.933794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.934188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.934197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.934394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.580 [2024-07-15 14:00:03.934402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.580 qpair failed and we were unable to recover it. 00:29:37.580 [2024-07-15 14:00:03.934814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.934822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 
00:29:37.581 [2024-07-15 14:00:03.935199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.935207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.935596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.935603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.935860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.935868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.936124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.936132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.936435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.936443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.936663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.936670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.937032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.937040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.937308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.937316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.937493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.937502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.937865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.937873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 
00:29:37.581 [2024-07-15 14:00:03.938271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.938279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.938671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.938679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.938933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.938941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.939365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.939373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.939669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.939677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.940093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.940100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.940515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.940523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.940915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.940922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.941343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.941351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.941743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.941751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 
00:29:37.581 [2024-07-15 14:00:03.942027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.942035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.942272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.942280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.942543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.942551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.942943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.942950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.943374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.943382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.943773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.943780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.944171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.944179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.944577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.944585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.945038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.945048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.945428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.945436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 
00:29:37.581 [2024-07-15 14:00:03.945645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.945652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.946057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.946065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.946462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.946469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.946860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.946867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.947259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.947267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.947532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.947540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.947995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.581 [2024-07-15 14:00:03.948003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.581 qpair failed and we were unable to recover it. 00:29:37.581 [2024-07-15 14:00:03.948204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.948212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.948589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.948597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.948985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.948993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 
00:29:37.582 [2024-07-15 14:00:03.949381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.949389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.949805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.949813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.950193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.950201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.950554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.950562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.950954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.950961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.951352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.951360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.951752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.951761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.952152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.952160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.952555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.952563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.952955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.952962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 
00:29:37.582 [2024-07-15 14:00:03.953351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.953359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.953821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.953829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.954211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.954219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.954621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.954629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.954914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.954922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.955329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.955337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.955628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.955636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.956048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.956056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.956268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.956277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.956721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.956728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 
00:29:37.582 [2024-07-15 14:00:03.957022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.957030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.957456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.957463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.957849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.957856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.958244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.958252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.958470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.958478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.958826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.958833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.959092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.959100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.959497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.959505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.959891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.959901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.960325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.960335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 
00:29:37.582 [2024-07-15 14:00:03.960551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.960558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.960812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.960820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.582 qpair failed and we were unable to recover it. 00:29:37.582 [2024-07-15 14:00:03.961249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.582 [2024-07-15 14:00:03.961257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.961319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.961328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.961717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.961724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.961943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.961950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.962340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.962346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.962721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.962728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.963146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.963153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.963587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.963594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 
00:29:37.583 [2024-07-15 14:00:03.963963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.963969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.964278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.964285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.964673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.964679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.964894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.964901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.965278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.965285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.965752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.965759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.966135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.966144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.966519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.966525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.966737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.966745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.966949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.966965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 
00:29:37.583 [2024-07-15 14:00:03.967185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.967193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.967547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.967554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.967800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.967806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.968118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.968128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.968534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.968540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.968961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.968969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.969160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.969167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.969551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.969558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.969927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.969934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.970370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.970377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 
00:29:37.583 [2024-07-15 14:00:03.970755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.970762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.971142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.971149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.971595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.971601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.971973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.971979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.972366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.972373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.972781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.972788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.973203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.973210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.973599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.973606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.973968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.973977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.974176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.974183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 
00:29:37.583 [2024-07-15 14:00:03.974605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.974611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.974983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.974989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.975378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.583 [2024-07-15 14:00:03.975385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.583 qpair failed and we were unable to recover it. 00:29:37.583 [2024-07-15 14:00:03.975669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.975676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.976142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.976149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.976412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.976420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.976831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.976837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.977083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.977089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.977418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.977425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.977815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.977822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 
00:29:37.584 [2024-07-15 14:00:03.978192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.978200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.978409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.978417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.978780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.978787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.979168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.979175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.979479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.979487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.979860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.979867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.980279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.980285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.980510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.980516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.980908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.980914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.981266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.981273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 
00:29:37.584 [2024-07-15 14:00:03.981675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.981682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.981975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.981982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.982463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.982471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.982865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.982872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.983274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.983281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.983754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.983761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.984132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.984139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.984534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.984542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.984956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.984964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.985350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.985357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 
00:29:37.584 [2024-07-15 14:00:03.985588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.985595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.985815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.985822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.986270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.986277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.986736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.986742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.987160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.987167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.987562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.987569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.987941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.987948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.988336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.988343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.988512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.988520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.988586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.988592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 
00:29:37.584 [2024-07-15 14:00:03.988987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.988994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.989376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.989383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.989851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.989857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.584 [2024-07-15 14:00:03.990232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.584 [2024-07-15 14:00:03.990238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.584 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.990421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.990428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.990868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.990875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.991296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.991303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.991674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.991681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.992053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.992060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.992497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.992504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 
00:29:37.585 [2024-07-15 14:00:03.992890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.992896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.993293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.993300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.993696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.993704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.993967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.993975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.994361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.994368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.994738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.994745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.995141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.995148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.995507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.995513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.995711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.995718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.996088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.996095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 
00:29:37.585 [2024-07-15 14:00:03.996480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.996487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.996870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.996876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.997129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.997136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.997560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.997566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.997945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.997951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.998170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.998177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.998387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.998394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.998610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.998616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.998848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.998854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:03.999275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.999282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 
00:29:37.585 [2024-07-15 14:00:03.999654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:03.999660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:04.000088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:04.000095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:04.000537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:04.000543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:04.000922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:04.000929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:04.001351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:04.001359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:04.001691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:04.001697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:04.001996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:04.002002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:04.002490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:04.002497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:04.002891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:04.002899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:04.003390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:04.003418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 
00:29:37.585 [2024-07-15 14:00:04.003642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:04.003650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:04.003850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:04.003859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:04.004138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:04.004146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:04.004427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.585 [2024-07-15 14:00:04.004433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.585 qpair failed and we were unable to recover it. 00:29:37.585 [2024-07-15 14:00:04.004709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.004716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.005118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.005135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.005573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.005580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.005972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.005979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.006384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.006392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.006687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.006694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 
00:29:37.586 [2024-07-15 14:00:04.007088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.007095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.007547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.007554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.007757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.007764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.008058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.008065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.008435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.008442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.008636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.008644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.008935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.008941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.009391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.009399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.009637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.009643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.009921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.009927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 
00:29:37.586 [2024-07-15 14:00:04.010282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.010289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.010672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.010680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.010955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.010962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.011038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.011044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.011401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.011409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.011477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.011484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.011766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.011772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.012155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.012162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.012528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.012534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 00:29:37.586 [2024-07-15 14:00:04.012800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.586 [2024-07-15 14:00:04.012806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.586 qpair failed and we were unable to recover it. 
00:29:37.586 [2024-07-15 14:00:04.012997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.013004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.013258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.013266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.013735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.013741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.013940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.013948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.014333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.014340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.014719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.014726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.015098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.015104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.015539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.015545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.015845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.015853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.016004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.016011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 
00:29:37.587 [2024-07-15 14:00:04.016426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.016433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.016809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.016815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.017186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.017193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.017490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.017497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.017959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.017965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.018343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.018350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.018724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.018731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.019125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.019133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.019413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.019421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 00:29:37.587 [2024-07-15 14:00:04.019712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.587 [2024-07-15 14:00:04.019720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.587 qpair failed and we were unable to recover it. 
00:29:37.587 [2024-07-15 14:00:04.020110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.020117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.020492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.020500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.020802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.020810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.021202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.021209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.021451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.021457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.021856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.021862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.022136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.022143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.022512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.022519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.022906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.022912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.023205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.023212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 
00:29:37.588 [2024-07-15 14:00:04.023604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.023612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.023990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.023997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.024427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.024434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.024859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.024866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.025248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.025255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.025631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.025640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.025853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.025861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.026261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.026268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.026580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.026587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.026781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.026787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 
00:29:37.588 [2024-07-15 14:00:04.027220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.027227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.027704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.027710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.028153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.028160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.028553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.028560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.028760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.028766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.588 [2024-07-15 14:00:04.029170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.588 [2024-07-15 14:00:04.029177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.588 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.029419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.029427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.029635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.029642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.030018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.030025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.030424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.030431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 
00:29:37.589 [2024-07-15 14:00:04.030623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.030630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.031059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.031066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.031360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.031367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.031768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.031775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.031979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.031985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.032388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.032395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.032764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.032771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.033144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.033151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.033367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.033373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.033783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.033790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 
00:29:37.589 [2024-07-15 14:00:04.034225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.034233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.034664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.034671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.034912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.034919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.035328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.035335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.035553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.035559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.035830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.035837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.036231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.036238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.036514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.036520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.036798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.036804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.037083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.037090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 
00:29:37.589 [2024-07-15 14:00:04.037478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.037485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.037857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.589 [2024-07-15 14:00:04.037864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.589 qpair failed and we were unable to recover it. 00:29:37.589 [2024-07-15 14:00:04.038161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.038168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.038533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.038540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.038997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.039004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.039394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.039402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.039710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.039717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.040113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.040123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.040505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.040512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.040884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.040890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 
00:29:37.590 [2024-07-15 14:00:04.041384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.041412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.041832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.041840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.042136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.042144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.042227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.042234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.042460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.042466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.042856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.042862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.043236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.043243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.043649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.043656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.043951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.043958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.044158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.044166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 
00:29:37.590 [2024-07-15 14:00:04.044518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.044526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.044924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.044932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.045227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.045234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.045644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.045651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.045953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.045960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.046437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.046444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.046813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.046821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.047192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.047199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.047610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.047617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 00:29:37.590 [2024-07-15 14:00:04.047987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.590 [2024-07-15 14:00:04.047994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.590 qpair failed and we were unable to recover it. 
00:29:37.591 [2024-07-15 14:00:04.048439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.048447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.048817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.048825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.049352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.049381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.049840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.049849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.050367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.050395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.050702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.050710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.051086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.051095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.051499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.051507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.051882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.051889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.052358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.052386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 
00:29:37.591 [2024-07-15 14:00:04.052806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.052814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.053241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.053248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.053522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.053529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.053824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.053832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.054265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.054273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.054486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.054498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.054878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.054885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.055301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.055309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.055682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.055691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.056104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.056111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 
00:29:37.591 [2024-07-15 14:00:04.056503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.056511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.056905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.056912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.057131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.057138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.057513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.057521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.057895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.057903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.058012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.058019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.058404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.058411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.058626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.058633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.058836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.058843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.059196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.059204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 
00:29:37.591 [2024-07-15 14:00:04.059619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.059626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.059919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.059927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.060304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.060312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.060599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.060606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.060903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.591 [2024-07-15 14:00:04.060911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.591 qpair failed and we were unable to recover it. 00:29:37.591 [2024-07-15 14:00:04.061365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.061372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.061765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.061772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.062049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.062056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.062327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.062334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.062789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.062796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 
00:29:37.592 [2024-07-15 14:00:04.063174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.063181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.063370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.063378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.063762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.063769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.064055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.064062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.064240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.064247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.064489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.064496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.064764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.064771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.065158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.065166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.065383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.065391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.065796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.065802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 
00:29:37.592 [2024-07-15 14:00:04.065901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.065910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.066097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.066105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.066538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.066545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.066920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.066927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.067318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.067325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.067714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.067723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.068102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.068110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.068504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.068512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.068962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.068970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.069260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.069267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 
00:29:37.592 [2024-07-15 14:00:04.069665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.069673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.070043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.070050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.070222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.070229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.070670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.070677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.071134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.592 [2024-07-15 14:00:04.071141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.592 qpair failed and we were unable to recover it. 00:29:37.592 [2024-07-15 14:00:04.071566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.071573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.071863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.071870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.072110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.072117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.072522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.072529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.072779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.072787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 
00:29:37.593 [2024-07-15 14:00:04.073089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.073097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.073301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.073309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.073591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.073598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.074010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.074017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.074407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.074414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.074818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.074825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.075110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.075118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.075517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.075524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.075896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.075904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.076298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.076305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 
00:29:37.593 [2024-07-15 14:00:04.076516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.076523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.076933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.076940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.077359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.077366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.077743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.077750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.077951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.077958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.078404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.078411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.078817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.593 [2024-07-15 14:00:04.078825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.593 qpair failed and we were unable to recover it. 00:29:37.593 [2024-07-15 14:00:04.079221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.594 [2024-07-15 14:00:04.079228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.594 qpair failed and we were unable to recover it. 00:29:37.594 [2024-07-15 14:00:04.079424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.594 [2024-07-15 14:00:04.079432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.594 qpair failed and we were unable to recover it. 00:29:37.594 [2024-07-15 14:00:04.079838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.594 [2024-07-15 14:00:04.079845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.594 qpair failed and we were unable to recover it. 
00:29:37.594 [2024-07-15 14:00:04.080094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.594 [2024-07-15 14:00:04.080102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.594 qpair failed and we were unable to recover it. 00:29:37.594 [2024-07-15 14:00:04.080485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.594 [2024-07-15 14:00:04.080493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.594 qpair failed and we were unable to recover it. 00:29:37.594 [2024-07-15 14:00:04.080708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.594 [2024-07-15 14:00:04.080715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.594 qpair failed and we were unable to recover it. 00:29:37.594 [2024-07-15 14:00:04.081233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.594 [2024-07-15 14:00:04.081240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.594 qpair failed and we were unable to recover it. 00:29:37.594 [2024-07-15 14:00:04.081618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.594 [2024-07-15 14:00:04.081626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.594 qpair failed and we were unable to recover it. 00:29:37.594 [2024-07-15 14:00:04.081839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.594 [2024-07-15 14:00:04.081849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.594 qpair failed and we were unable to recover it. 00:29:37.594 [2024-07-15 14:00:04.082248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.594 [2024-07-15 14:00:04.082256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.594 qpair failed and we were unable to recover it. 00:29:37.594 [2024-07-15 14:00:04.082530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.594 [2024-07-15 14:00:04.082539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.594 qpair failed and we were unable to recover it. 00:29:37.594 [2024-07-15 14:00:04.082814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.594 [2024-07-15 14:00:04.082821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.594 qpair failed and we were unable to recover it. 00:29:37.594 [2024-07-15 14:00:04.083074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.594 [2024-07-15 14:00:04.083081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.594 qpair failed and we were unable to recover it. 
00:29:37.594 [2024-07-15 14:00:04.083299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.594 [2024-07-15 14:00:04.083307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.594 qpair failed and we were unable to recover it. 00:29:37.594 [2024-07-15 14:00:04.083717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.594 [2024-07-15 14:00:04.083724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.594 qpair failed and we were unable to recover it. 00:29:37.594 [2024-07-15 14:00:04.084096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.594 [2024-07-15 14:00:04.084103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.594 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.084502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.084511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.084891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.084899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.085332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.085339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.085551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.085558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.085807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.085815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.086026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.086035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.086445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.086453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 
00:29:37.862 [2024-07-15 14:00:04.086833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.086841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.087236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.087244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.087613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.087620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.087992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.087999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.088456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.088463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.088669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.088675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.088964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.088972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.089379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.089387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.089768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.089775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.090157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.090164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 
00:29:37.862 [2024-07-15 14:00:04.090568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.090575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.090957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.090965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.091397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.091405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.091872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.091879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.092385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.092413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.092797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.092807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.093183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.093190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.093604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.093611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.093768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.093775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.094213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.094221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 
00:29:37.862 [2024-07-15 14:00:04.094491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.094498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.094967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.094974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.095359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.095366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.095770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.095777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.096152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.096160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.862 qpair failed and we were unable to recover it. 00:29:37.862 [2024-07-15 14:00:04.096551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.862 [2024-07-15 14:00:04.096561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.096808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.096815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.097023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.097030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.097436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.097444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.097820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.097827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 
00:29:37.863 [2024-07-15 14:00:04.098040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.098046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.098248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.098255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.098659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.098666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.098914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.098921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.099300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.099307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.099606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.099612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.099996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.100002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.100437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.100445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.100660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.100667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.101070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.101077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 
00:29:37.863 [2024-07-15 14:00:04.101450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.101457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.101757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.101764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.101954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.101961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.102182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.102189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.102413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.102419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.102832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.102839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.103298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.103305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.103612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.103619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.104011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.104018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.104300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.104307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 
00:29:37.863 [2024-07-15 14:00:04.104707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.104714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.104923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.104930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.105176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.105183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.105576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.105583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.106028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.106036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.106456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.106462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.106854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.106861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.107271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.107277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.107654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.107661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.108126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.108133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 
00:29:37.863 [2024-07-15 14:00:04.108431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.108437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.108837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.108844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.109229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.109236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.109527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.109534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.109930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.109938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.110248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.863 [2024-07-15 14:00:04.110257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.863 qpair failed and we were unable to recover it. 00:29:37.863 [2024-07-15 14:00:04.110632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.110639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.111094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.111101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.111487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.111494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.111806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.111812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 
00:29:37.864 [2024-07-15 14:00:04.112281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.112288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.112669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.112675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.113042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.113049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.113460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.113468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.113694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.113701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.113904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.113911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.114317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.114325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.114532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.114540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.114950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.114958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.115117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.115134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 
00:29:37.864 [2024-07-15 14:00:04.115519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.115526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.115631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.115638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.116083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.116089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.116496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.116503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.116945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.116952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.117358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.117365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.117578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.117585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.117879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.117886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.118247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.118254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.118422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.118431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 
00:29:37.864 [2024-07-15 14:00:04.118703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.118710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.119119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.119129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.119324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.119331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.119762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.119769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.120151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.120158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.120503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.120510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.120887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.120894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.121313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.121319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.121567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.121575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.121977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.121985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 
00:29:37.864 [2024-07-15 14:00:04.122438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.122446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.122696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.122703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.123051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.123059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.123457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.123464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.123881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.123887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.124084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.124093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.864 qpair failed and we were unable to recover it. 00:29:37.864 [2024-07-15 14:00:04.124154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.864 [2024-07-15 14:00:04.124161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.124540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.124547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.124911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.124918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.124982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.124988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 
00:29:37.865 [2024-07-15 14:00:04.125339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.125346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.125721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.125727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.126103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.126110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.126493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.126500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.126805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.126811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.127266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.127273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.127484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.127490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.127702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.127708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.128078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.128084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.128510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.128518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 
00:29:37.865 [2024-07-15 14:00:04.128829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.128836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.129072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.129079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.129508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.129515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.129981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.129988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.130363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.130371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.130580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.130586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.130880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.130887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.131068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.131076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.131196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.131203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 
00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Write completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Write completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Write completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Write completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Write completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Write completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Write completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Write completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Write completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Write completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Write completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Write completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Write completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Write completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Read completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 Write completed with error (sct=0, sc=8) 00:29:37.865 starting I/O failed 00:29:37.865 [2024-07-15 14:00:04.131919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.865 [2024-07-15 14:00:04.132528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.132617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 
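Context for the block above (not part of the captured log): every posix_sock_create() failure reports errno = 111, which on Linux is ECONNREFUSED, i.e. the initiator keeps retrying a TCP connection to 10.0.0.2:4420 (the NVMe/TCP port) while nothing is accepting on that address, and the CQ transport error -6 matches ENXIO ("No such device or address"), exactly as the log prints. The burst of "completed with error (sct=0, sc=8)" entries is NVMe completion status: status code type 0 (generic) with status code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion", consistent with in-flight I/O being failed when the qpair is torn down. The standalone C sketch below reproduces only the ECONNREFUSED symptom for illustration; it is not SPDK code, and the address and port are simply taken from the log.

/* Illustrative only: shows that errno 111 on Linux is ECONNREFUSED when
 * nothing is listening on the target address/port seen in the log above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno = 111
         * (Connection refused), the same value logged by posix_sock_create(). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}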
00:29:37.865 [2024-07-15 14:00:04.133087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.133135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 [2024-07-15 14:00:04.133558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:37.865 [2024-07-15 14:00:04.133646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fab04000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:37.865 [2024-07-15 14:00:04.133954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.133963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:37.865 [2024-07-15 14:00:04.134465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.134493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.865 qpair failed and we were unable to recover it. 00:29:37.865 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:37.865 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.865 [2024-07-15 14:00:04.134855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.865 [2024-07-15 14:00:04.134864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.135369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.135397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.135667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.135676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.135980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.135989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 
00:29:37.866 [2024-07-15 14:00:04.136384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.136391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.136616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.136623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.136689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.136698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.136914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.136921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.137241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.137249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.137532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.137539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.137977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.137984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.138179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.138188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.138609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.138616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.139041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.139049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 
00:29:37.866 [2024-07-15 14:00:04.139273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.139281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.139570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.139576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.139645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.139653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.139894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.139900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.140134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.140141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.140399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.140406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.140691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.140698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.140967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.140974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.141230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.141237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.141401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.141408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 
00:29:37.866 [2024-07-15 14:00:04.141781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.141789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.142183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.142190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.142579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.142586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.143007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.143015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.143460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.143468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.143718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.143727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.144142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.144150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.144249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.144255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.144459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.144466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.866 qpair failed and we were unable to recover it. 00:29:37.866 [2024-07-15 14:00:04.144850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.866 [2024-07-15 14:00:04.144857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 
00:29:37.867 [2024-07-15 14:00:04.145104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.145111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.145509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.145517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.145895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.145902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.146316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.146324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.146703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.146711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.147083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.147090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.147394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.147403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.147697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.147705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.148081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.148089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.148305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.148313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 
00:29:37.867 [2024-07-15 14:00:04.148612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.148619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.148879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.148886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.149126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.149133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.149504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.149511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.149889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.149897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.150191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.150198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.150580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.150589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.151006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.151014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.151406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.151413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.151787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.151793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 
00:29:37.867 [2024-07-15 14:00:04.152211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.152219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.152645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.152652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.153089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.153096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.153342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.153349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.153527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.153535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.153918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.153925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.154297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.154305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.154765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.154773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.155160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.155167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.155549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.155555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 
00:29:37.867 [2024-07-15 14:00:04.155765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.155774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.156157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.156165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.156602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.156609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.156898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.156905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.157193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.157201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.157299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.157308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.157570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.157576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.157992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.157999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.158372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.867 [2024-07-15 14:00:04.158380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.867 qpair failed and we were unable to recover it. 00:29:37.867 [2024-07-15 14:00:04.158849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.158856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 
00:29:37.868 [2024-07-15 14:00:04.159231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.159238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.159510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.159517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.159943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.159950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.160218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.160226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.160406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.160413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.160831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.160839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.161090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.161097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.161378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.161387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.161760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.161767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.161971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.161978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 
00:29:37.868 [2024-07-15 14:00:04.162375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.162382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.162673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.162680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.163052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.163059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.163262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.163269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.163682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.163689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.164104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.164111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.164360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.164367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.164662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.164669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.165041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.165049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.165152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.165158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faafc000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 
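The repeated failures above all report errno = 111, which on Linux is ECONNREFUSED: each connect() attempt toward 10.0.0.2:4420 is actively refused because nothing is accepting on that port at that instant, which is the condition a target-disconnect test is expected to provoke. A minimal sketch of the same failure mode, assuming a host where no listener is bound to that address/port (bash's /dev/tcp pseudo-device and coreutils timeout are used purely for illustration):

# Attempt a raw TCP connect the way the NVMe/TCP initiator does; with no
# listener on the remote side the attempt is rejected and connect() fails
# with ECONNREFUSED (errno 111 on Linux), matching the posix_sock_create
# errors recorded in this log.
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
  echo "connect to 10.0.0.2:4420 failed (refused or timed out) - target not listening"
fi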
00:29:37.868 [2024-07-15 14:00:04.165251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79af20 is same with the state(5) to be set 00:29:37.868 [2024-07-15 14:00:04.165837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.165924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaf4000b90 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.166231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.166254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.166714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.166724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.167106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.167116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.167637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.167675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.168102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.168113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.168607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.168645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.168790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.168804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.169347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.169384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 
00:29:37.868 [2024-07-15 14:00:04.169817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.169829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.170366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.170404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.170853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.170865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.171372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.171409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.171863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.171876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 [2024-07-15 14:00:04.172379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.172416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.868 [2024-07-15 14:00:04.172844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.172859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.868 qpair failed and we were unable to recover it. 00:29:37.868 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:37.868 [2024-07-15 14:00:04.173242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.868 [2024-07-15 14:00:04.173254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 
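Interleaved with the connection errors, the shell trace above shows the test registering its cleanup trap and then creating the backing device with rpc_cmd bdev_malloc_create 64 512 -b Malloc0, i.e. a 64 MB RAM-backed bdev with 512-byte blocks named Malloc0. A minimal sketch of the same step issued directly, assuming rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py and that the target's default RPC socket is in use:

# Create a 64 MB malloc bdev with a 512-byte block size, named Malloc0,
# on the running SPDK target (the same RPC the trace invokes via rpc_cmd).
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0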
00:29:37.869 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.869 [2024-07-15 14:00:04.173657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.869 [2024-07-15 14:00:04.173668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.174039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.174050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.174355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.174365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.174655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.174667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.174889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.174901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.174995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.175005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.175415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.175425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.175800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.175810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.176214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.176224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 
00:29:37.869 [2024-07-15 14:00:04.176442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.176451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.176826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.176835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.177232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.177242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.177576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.177586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.177799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.177808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.178091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.178101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.178476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.178486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.178932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.178941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.179248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.179258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.179651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.179661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 
00:29:37.869 [2024-07-15 14:00:04.180035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.180045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.180457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.180467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.180852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.180862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.181279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.181289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.181677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.181686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.182059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.182068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.182356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.182365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.182734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.182743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.183221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.183231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.183613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.183623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 
00:29:37.869 [2024-07-15 14:00:04.183993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.184003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.184262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.184272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.184729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.184739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.185114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.185127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.869 [2024-07-15 14:00:04.185543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.869 [2024-07-15 14:00:04.185552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.869 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.185968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.185978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.186267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.186278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.186655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.186664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.187080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.187090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.187301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.187311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 
00:29:37.870 [2024-07-15 14:00:04.187703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.187713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 Malloc0 00:29:37.870 [2024-07-15 14:00:04.188175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.188185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.188574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.188583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.870 [2024-07-15 14:00:04.188962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.188972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:37.870 [2024-07-15 14:00:04.189196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.189206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.870 [2024-07-15 14:00:04.189490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.189499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.870 [2024-07-15 14:00:04.189912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.189922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.190072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.190082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 
00:29:37.870 [2024-07-15 14:00:04.190458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.190468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.190850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.190860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.191308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.191318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.191596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.191605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.192018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.192028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.192431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.192441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.192813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.192822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.192973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.192982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.193373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.193383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.193777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.193786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 
00:29:37.870 [2024-07-15 14:00:04.194160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.194171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.194581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.194591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.194806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.194815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.195182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.195192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.195488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.195497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.195587] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.870 [2024-07-15 14:00:04.195912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.195924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.196298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.196308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.196706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.196717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.197104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.197114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.197516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.197526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 
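The trace above also shows rpc_cmd nvmf_create_transport -t tcp -o, which the target acknowledges with the "*** TCP Transport Init ***" notice from nvmf_tcp_create. A minimal sketch of the equivalent direct call, assuming the same scripts/rpc.py wrapper; the -o flag is carried over verbatim from the trace rather than interpreted:

# Initialize the NVMe-oF TCP transport on the target before any subsystems
# or listeners are added (mirrors the rpc_cmd nvmf_create_transport step).
./scripts/rpc.py nvmf_create_transport -t tcp -o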
00:29:37.870 [2024-07-15 14:00:04.197946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.197957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.198347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.198357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.870 [2024-07-15 14:00:04.198721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.870 [2024-07-15 14:00:04.198731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.870 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.199114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.199129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.199631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.199640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.200013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.200022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.200248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.200259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.200541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.200550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.200930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.200939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.201325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.201335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 
00:29:37.871 [2024-07-15 14:00:04.201722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.201731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.202056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.202065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.202262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.202272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.202557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.202568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.202990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.203000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.203396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.203406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.203790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.203799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.204256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.204266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.871 [2024-07-15 14:00:04.204655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.204665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 
00:29:37.871 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:37.871 [2024-07-15 14:00:04.205081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.205091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.871 [2024-07-15 14:00:04.205485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.871 [2024-07-15 14:00:04.205495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.205784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.205793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.206082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.206099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.206502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.206513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.206805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.206814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.206971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.206981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.207333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.207343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.207771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.207780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 
00:29:37.871 [2024-07-15 14:00:04.208188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.208198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.208583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.208592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.208881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.208890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.209264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.209274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.209631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.209641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.209845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.209854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.871 qpair failed and we were unable to recover it. 00:29:37.871 [2024-07-15 14:00:04.210203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.871 [2024-07-15 14:00:04.210215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.210633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.210643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.210945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.210955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.211170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.211179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 
00:29:37.872 [2024-07-15 14:00:04.211579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.211589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.211968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.211978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.212208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.212217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.212597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.212606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.212827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.212836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.213227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.213237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.213645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.213655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.214080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.214090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.214549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.214558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.214958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.214968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 
00:29:37.872 [2024-07-15 14:00:04.215351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.215361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.215576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.215585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.215955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.215964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.216259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.216269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.872 [2024-07-15 14:00:04.216677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.216687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.216897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.216910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:37.872 [2024-07-15 14:00:04.217278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.217289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.872 [2024-07-15 14:00:04.217570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.872 [2024-07-15 14:00:04.217580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 
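Between the connection errors, the trace shows the subsystem being assembled: rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 followed by rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0. A minimal sketch of the same sequence issued directly, again assuming the scripts/rpc.py wrapper; the final listener line is an assumption (it is not visible in this excerpt) using the 10.0.0.2:4420 address the initiator is retrying against:

# Create the subsystem (allow any host, fixed serial number) and attach the
# malloc bdev created earlier as its namespace.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Assumed follow-up (not shown in this excerpt): expose the subsystem on the
# address and port the initiator is connecting to.
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420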
00:29:37.872 [2024-07-15 14:00:04.218018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.218027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.218465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.218475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.218882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.218891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.219097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.219108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.219512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.219522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.219813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.219822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.219901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.219910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.220273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.220283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.220678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.220687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.872 [2024-07-15 14:00:04.221097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.221107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 
00:29:37.872 [2024-07-15 14:00:04.221505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.872 [2024-07-15 14:00:04.221515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.872 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.221842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.221851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.222250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.222260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.222539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.222548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.222951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.222960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.223153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.223163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.223376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.223386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.223695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.223707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.224060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.224069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.224576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.224585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 
00:29:37.873 [2024-07-15 14:00:04.224781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.224790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.225201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.225211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.225600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.225609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.226002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.226012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.226406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.226416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.226827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.226837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.227208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.227218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.227613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.227623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.228007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.228017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.228284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.228294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 
00:29:37.873 [2024-07-15 14:00:04.228694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.873 [2024-07-15 14:00:04.228703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:37.873 [2024-07-15 14:00:04.229098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.229109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.873 [2024-07-15 14:00:04.229423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.229434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.873 [2024-07-15 14:00:04.229637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.229646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.229985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.229995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.230374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.230384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.230753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.230763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.231214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.231224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 
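The trace from `host/target_disconnect.sh@25` above is adding the data listener for the subsystem: `nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420`, where -t selects the transport type, -a the target address (traddr) and -s the service ID (trsvcid, here the TCP port). For context, a typical rpc.py sequence that leads up to this point is sketched below; the malloc size, block size, serial number and exact ordering are illustrative assumptions, and only the last two commands correspond to rpc calls visible in this log:

# Sketch of a typical SPDK NVMe/TCP target bring-up via scripts/rpc.py.
RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp                              # register the TCP transport
$RPC bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM-backed bdev, 512 B blocks (assumed sizes)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0            # as in this log
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # as in this log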
00:29:37.873 [2024-07-15 14:00:04.231739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.231749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.232001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.232011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.232316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.232326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.232555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.232565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.232968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.232977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.233206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.233216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.233636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.233646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.234013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.234022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.234431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.234441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.234838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.234847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 
00:29:37.873 [2024-07-15 14:00:04.235261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.235270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.235487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.873 [2024-07-15 14:00:04.235497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.873 qpair failed and we were unable to recover it. 00:29:37.873 [2024-07-15 14:00:04.235835] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.874 [2024-07-15 14:00:04.235886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.874 [2024-07-15 14:00:04.235895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x78d220 with addr=10.0.0.2, port=4420 00:29:37.874 qpair failed and we were unable to recover it. 00:29:37.874 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.874 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:37.874 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.874 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.874 [2024-07-15 14:00:04.246337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.874 [2024-07-15 14:00:04.246445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.874 [2024-07-15 14:00:04.246464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.874 [2024-07-15 14:00:04.246471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.874 [2024-07-15 14:00:04.246478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:37.874 [2024-07-15 14:00:04.246496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.874 qpair failed and we were unable to recover it. 
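From this point the failure mode changes: the target reports `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***`, the discovery listener is added, and the TCP connect() now succeeds, but the fabrics CONNECT for I/O qpair id 3 is rejected with `Unknown controller ID 0x1` and completed with sct 1, sc 130 (0x82), which is the command-specific Connect invalid-parameters class of status: the target no longer has a controller with cntlid 0x1 because the test has torn it down, so the host abandons the qpair (`CQ transport error -6`, i.e. ENXIO, as the log itself notes). When debugging this kind of failure it can help to dump the target's view of the subsystem; the rpc.py path below is an assumption, the NQN is the one from this log:

# Hedged debugging sketch, not part of the test itself.
RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
$RPC nvmf_subsystem_get_listeners   "$NQN"    # is 10.0.0.2:4420 still listed?
$RPC nvmf_subsystem_get_controllers "$NQN"    # does cntlid 0x1 still exist on the target?
$RPC nvmf_subsystem_get_qpairs      "$NQN"    # which qpairs survived the disconnect?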
00:29:37.874 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.874 14:00:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1278174 00:29:37.874 [2024-07-15 14:00:04.256363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.874 [2024-07-15 14:00:04.256446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.874 [2024-07-15 14:00:04.256463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.874 [2024-07-15 14:00:04.256470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.874 [2024-07-15 14:00:04.256477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:37.874 [2024-07-15 14:00:04.256492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.874 qpair failed and we were unable to recover it. 00:29:37.874 [2024-07-15 14:00:04.266391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.874 [2024-07-15 14:00:04.266476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.874 [2024-07-15 14:00:04.266492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.874 [2024-07-15 14:00:04.266499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.874 [2024-07-15 14:00:04.266505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:37.874 [2024-07-15 14:00:04.266519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.874 qpair failed and we were unable to recover it. 00:29:37.874 [2024-07-15 14:00:04.276383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.874 [2024-07-15 14:00:04.276469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.874 [2024-07-15 14:00:04.276485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.874 [2024-07-15 14:00:04.276492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.874 [2024-07-15 14:00:04.276498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:37.874 [2024-07-15 14:00:04.276511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.874 qpair failed and we were unable to recover it. 
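The trace `host/target_disconnect.sh@50 -- # wait 1278174` shows the script blocking on PID 1278174, presumably a background initiator job started earlier outside this excerpt, while the qpair failures above keep accumulating in its output. Reduced to a bash skeleton with a placeholder body (the real workload driven by the test is not reproduced here), the pattern looks roughly like this:

# Skeleton of the background-job/wait pattern only.
(
    for i in $(seq 1 100); do
        :            # one connect/IO/reconnect iteration against the target would go here
        sleep 0.1
    done
) &
bg_pid=$!

# ... foreground: the test reconfigures or disconnects the target here ...

wait "$bg_pid"                                   # block until the background job exits
echo "background job finished with status $?"    # wait propagates the job's exit status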
00:29:37.874 [2024-07-15 14:00:04.286439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.874 [2024-07-15 14:00:04.286547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.874 [2024-07-15 14:00:04.286563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.874 [2024-07-15 14:00:04.286570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.874 [2024-07-15 14:00:04.286576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:37.874 [2024-07-15 14:00:04.286590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.874 qpair failed and we were unable to recover it. 00:29:37.874 [2024-07-15 14:00:04.296386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.874 [2024-07-15 14:00:04.296465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.874 [2024-07-15 14:00:04.296484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.874 [2024-07-15 14:00:04.296492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.874 [2024-07-15 14:00:04.296498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:37.874 [2024-07-15 14:00:04.296512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.874 qpair failed and we were unable to recover it. 00:29:37.874 [2024-07-15 14:00:04.306437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.874 [2024-07-15 14:00:04.306514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.874 [2024-07-15 14:00:04.306530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.874 [2024-07-15 14:00:04.306537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.874 [2024-07-15 14:00:04.306543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:37.874 [2024-07-15 14:00:04.306557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.874 qpair failed and we were unable to recover it. 
00:29:37.874 [2024-07-15 14:00:04.316444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.874 [2024-07-15 14:00:04.316525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.874 [2024-07-15 14:00:04.316541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.874 [2024-07-15 14:00:04.316549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.874 [2024-07-15 14:00:04.316555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:37.874 [2024-07-15 14:00:04.316569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.874 qpair failed and we were unable to recover it. 00:29:37.874 [2024-07-15 14:00:04.326474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.874 [2024-07-15 14:00:04.326561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.874 [2024-07-15 14:00:04.326577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.874 [2024-07-15 14:00:04.326584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.874 [2024-07-15 14:00:04.326590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:37.874 [2024-07-15 14:00:04.326604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.874 qpair failed and we were unable to recover it. 00:29:37.874 [2024-07-15 14:00:04.336493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.874 [2024-07-15 14:00:04.336572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.874 [2024-07-15 14:00:04.336588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.874 [2024-07-15 14:00:04.336595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.874 [2024-07-15 14:00:04.336601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:37.874 [2024-07-15 14:00:04.336618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.874 qpair failed and we were unable to recover it. 
00:29:37.874 [2024-07-15 14:00:04.346539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.874 [2024-07-15 14:00:04.346668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.875 [2024-07-15 14:00:04.346684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.875 [2024-07-15 14:00:04.346691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.875 [2024-07-15 14:00:04.346697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:37.875 [2024-07-15 14:00:04.346711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.875 qpair failed and we were unable to recover it. 00:29:37.875 [2024-07-15 14:00:04.356611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.875 [2024-07-15 14:00:04.356723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.875 [2024-07-15 14:00:04.356739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.875 [2024-07-15 14:00:04.356745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.875 [2024-07-15 14:00:04.356751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:37.875 [2024-07-15 14:00:04.356765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.875 qpair failed and we were unable to recover it. 00:29:37.875 [2024-07-15 14:00:04.366592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.875 [2024-07-15 14:00:04.366683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.875 [2024-07-15 14:00:04.366709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.875 [2024-07-15 14:00:04.366717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.875 [2024-07-15 14:00:04.366724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:37.875 [2024-07-15 14:00:04.366743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.875 qpair failed and we were unable to recover it. 
00:29:37.875 [2024-07-15 14:00:04.376529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.875 [2024-07-15 14:00:04.376627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.875 [2024-07-15 14:00:04.376645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.875 [2024-07-15 14:00:04.376652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.875 [2024-07-15 14:00:04.376658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:37.875 [2024-07-15 14:00:04.376673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.875 qpair failed and we were unable to recover it. 00:29:38.137 [2024-07-15 14:00:04.386704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.137 [2024-07-15 14:00:04.386782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.137 [2024-07-15 14:00:04.386803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.137 [2024-07-15 14:00:04.386810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.137 [2024-07-15 14:00:04.386816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.137 [2024-07-15 14:00:04.386830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.137 qpair failed and we were unable to recover it. 00:29:38.137 [2024-07-15 14:00:04.396655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.137 [2024-07-15 14:00:04.396735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.137 [2024-07-15 14:00:04.396751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.137 [2024-07-15 14:00:04.396758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.137 [2024-07-15 14:00:04.396764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.137 [2024-07-15 14:00:04.396778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.137 qpair failed and we were unable to recover it. 
00:29:38.137 [2024-07-15 14:00:04.406745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.137 [2024-07-15 14:00:04.406836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.137 [2024-07-15 14:00:04.406862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.137 [2024-07-15 14:00:04.406871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.137 [2024-07-15 14:00:04.406877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.137 [2024-07-15 14:00:04.406896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.137 qpair failed and we were unable to recover it. 00:29:38.137 [2024-07-15 14:00:04.416681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.137 [2024-07-15 14:00:04.416769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.137 [2024-07-15 14:00:04.416794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.137 [2024-07-15 14:00:04.416803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.137 [2024-07-15 14:00:04.416810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.137 [2024-07-15 14:00:04.416828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.137 qpair failed and we were unable to recover it. 00:29:38.137 [2024-07-15 14:00:04.426780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.137 [2024-07-15 14:00:04.426864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.137 [2024-07-15 14:00:04.426882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.137 [2024-07-15 14:00:04.426889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.137 [2024-07-15 14:00:04.426896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.137 [2024-07-15 14:00:04.426916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.137 qpair failed and we were unable to recover it. 
00:29:38.137 [2024-07-15 14:00:04.436781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.137 [2024-07-15 14:00:04.436871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.137 [2024-07-15 14:00:04.436896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.138 [2024-07-15 14:00:04.436904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.138 [2024-07-15 14:00:04.436911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.138 [2024-07-15 14:00:04.436929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.138 qpair failed and we were unable to recover it. 00:29:38.138 [2024-07-15 14:00:04.446830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.138 [2024-07-15 14:00:04.446928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.138 [2024-07-15 14:00:04.446953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.138 [2024-07-15 14:00:04.446962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.138 [2024-07-15 14:00:04.446969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.138 [2024-07-15 14:00:04.446987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.138 qpair failed and we were unable to recover it. 00:29:38.138 [2024-07-15 14:00:04.456838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.138 [2024-07-15 14:00:04.456923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.138 [2024-07-15 14:00:04.456941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.138 [2024-07-15 14:00:04.456948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.138 [2024-07-15 14:00:04.456955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.138 [2024-07-15 14:00:04.456970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.138 qpair failed and we were unable to recover it. 
00:29:38.138 [2024-07-15 14:00:04.466876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.138 [2024-07-15 14:00:04.466953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.138 [2024-07-15 14:00:04.466971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.138 [2024-07-15 14:00:04.466978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.138 [2024-07-15 14:00:04.466984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.138 [2024-07-15 14:00:04.466999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.138 qpair failed and we were unable to recover it. 00:29:38.138 [2024-07-15 14:00:04.476876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.138 [2024-07-15 14:00:04.476956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.138 [2024-07-15 14:00:04.476976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.138 [2024-07-15 14:00:04.476983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.138 [2024-07-15 14:00:04.476989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.138 [2024-07-15 14:00:04.477003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.138 qpair failed and we were unable to recover it. 00:29:38.138 [2024-07-15 14:00:04.486925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.138 [2024-07-15 14:00:04.487009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.138 [2024-07-15 14:00:04.487025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.138 [2024-07-15 14:00:04.487032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.138 [2024-07-15 14:00:04.487038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.138 [2024-07-15 14:00:04.487052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.138 qpair failed and we were unable to recover it. 
00:29:38.138 [2024-07-15 14:00:04.496970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.138 [2024-07-15 14:00:04.497056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.138 [2024-07-15 14:00:04.497072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.138 [2024-07-15 14:00:04.497079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.138 [2024-07-15 14:00:04.497085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.138 [2024-07-15 14:00:04.497099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.138 qpair failed and we were unable to recover it. 00:29:38.138 [2024-07-15 14:00:04.507002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.138 [2024-07-15 14:00:04.507082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.138 [2024-07-15 14:00:04.507098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.138 [2024-07-15 14:00:04.507105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.138 [2024-07-15 14:00:04.507112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.138 [2024-07-15 14:00:04.507131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.138 qpair failed and we were unable to recover it. 00:29:38.138 [2024-07-15 14:00:04.517141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.138 [2024-07-15 14:00:04.517233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.138 [2024-07-15 14:00:04.517249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.138 [2024-07-15 14:00:04.517256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.138 [2024-07-15 14:00:04.517269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.138 [2024-07-15 14:00:04.517283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.138 qpair failed and we were unable to recover it. 
00:29:38.138 [2024-07-15 14:00:04.527116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.138 [2024-07-15 14:00:04.527204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.138 [2024-07-15 14:00:04.527220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.138 [2024-07-15 14:00:04.527228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.138 [2024-07-15 14:00:04.527234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.138 [2024-07-15 14:00:04.527248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.138 qpair failed and we were unable to recover it. 00:29:38.138 [2024-07-15 14:00:04.537059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.138 [2024-07-15 14:00:04.537142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.138 [2024-07-15 14:00:04.537158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.138 [2024-07-15 14:00:04.537165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.138 [2024-07-15 14:00:04.537171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.138 [2024-07-15 14:00:04.537185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.138 qpair failed and we were unable to recover it. 00:29:38.138 [2024-07-15 14:00:04.547143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.138 [2024-07-15 14:00:04.547230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.138 [2024-07-15 14:00:04.547246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.138 [2024-07-15 14:00:04.547254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.138 [2024-07-15 14:00:04.547260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.138 [2024-07-15 14:00:04.547274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.138 qpair failed and we were unable to recover it. 
00:29:38.138 [2024-07-15 14:00:04.557111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.138 [2024-07-15 14:00:04.557197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.138 [2024-07-15 14:00:04.557213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.138 [2024-07-15 14:00:04.557220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.138 [2024-07-15 14:00:04.557226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.138 [2024-07-15 14:00:04.557240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.138 qpair failed and we were unable to recover it. 00:29:38.138 [2024-07-15 14:00:04.567044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.138 [2024-07-15 14:00:04.567146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.138 [2024-07-15 14:00:04.567162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.138 [2024-07-15 14:00:04.567169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.139 [2024-07-15 14:00:04.567176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.139 [2024-07-15 14:00:04.567189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.139 qpair failed and we were unable to recover it. 00:29:38.139 [2024-07-15 14:00:04.577153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.139 [2024-07-15 14:00:04.577234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.139 [2024-07-15 14:00:04.577250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.139 [2024-07-15 14:00:04.577257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.139 [2024-07-15 14:00:04.577263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.139 [2024-07-15 14:00:04.577279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.139 qpair failed and we were unable to recover it. 
00:29:38.139 [2024-07-15 14:00:04.587193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.139 [2024-07-15 14:00:04.587272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.139 [2024-07-15 14:00:04.587288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.139 [2024-07-15 14:00:04.587295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.139 [2024-07-15 14:00:04.587301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.139 [2024-07-15 14:00:04.587315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.139 qpair failed and we were unable to recover it. 00:29:38.139 [2024-07-15 14:00:04.597218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.139 [2024-07-15 14:00:04.597301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.139 [2024-07-15 14:00:04.597316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.139 [2024-07-15 14:00:04.597323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.139 [2024-07-15 14:00:04.597329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.139 [2024-07-15 14:00:04.597343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.139 qpair failed and we were unable to recover it. 00:29:38.139 [2024-07-15 14:00:04.607270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.139 [2024-07-15 14:00:04.607358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.139 [2024-07-15 14:00:04.607374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.139 [2024-07-15 14:00:04.607381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.139 [2024-07-15 14:00:04.607391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.139 [2024-07-15 14:00:04.607405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.139 qpair failed and we were unable to recover it. 
00:29:38.139 [2024-07-15 14:00:04.617226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.139 [2024-07-15 14:00:04.617310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.139 [2024-07-15 14:00:04.617327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.139 [2024-07-15 14:00:04.617334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.139 [2024-07-15 14:00:04.617340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.139 [2024-07-15 14:00:04.617354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.139 qpair failed and we were unable to recover it. 00:29:38.139 [2024-07-15 14:00:04.627318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.139 [2024-07-15 14:00:04.627395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.139 [2024-07-15 14:00:04.627411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.139 [2024-07-15 14:00:04.627418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.139 [2024-07-15 14:00:04.627424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.139 [2024-07-15 14:00:04.627438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.139 qpair failed and we were unable to recover it. 00:29:38.139 [2024-07-15 14:00:04.637360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.139 [2024-07-15 14:00:04.637438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.139 [2024-07-15 14:00:04.637454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.139 [2024-07-15 14:00:04.637461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.139 [2024-07-15 14:00:04.637467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.139 [2024-07-15 14:00:04.637481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.139 qpair failed and we were unable to recover it. 
00:29:38.139 [2024-07-15 14:00:04.647302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.139 [2024-07-15 14:00:04.647389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.139 [2024-07-15 14:00:04.647405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.139 [2024-07-15 14:00:04.647412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.139 [2024-07-15 14:00:04.647418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.139 [2024-07-15 14:00:04.647432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.139 qpair failed and we were unable to recover it. 00:29:38.139 [2024-07-15 14:00:04.657411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.139 [2024-07-15 14:00:04.657503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.139 [2024-07-15 14:00:04.657519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.139 [2024-07-15 14:00:04.657526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.139 [2024-07-15 14:00:04.657532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.139 [2024-07-15 14:00:04.657546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.139 qpair failed and we were unable to recover it. 00:29:38.401 [2024-07-15 14:00:04.667452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.401 [2024-07-15 14:00:04.667532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.401 [2024-07-15 14:00:04.667549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.401 [2024-07-15 14:00:04.667556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.401 [2024-07-15 14:00:04.667562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.401 [2024-07-15 14:00:04.667577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.401 qpair failed and we were unable to recover it. 
00:29:38.401 [2024-07-15 14:00:04.677464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.401 [2024-07-15 14:00:04.677546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.401 [2024-07-15 14:00:04.677562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.401 [2024-07-15 14:00:04.677569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.401 [2024-07-15 14:00:04.677575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.401 [2024-07-15 14:00:04.677589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.401 qpair failed and we were unable to recover it. 00:29:38.401 [2024-07-15 14:00:04.687442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.401 [2024-07-15 14:00:04.687523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.401 [2024-07-15 14:00:04.687539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.401 [2024-07-15 14:00:04.687546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.401 [2024-07-15 14:00:04.687552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.401 [2024-07-15 14:00:04.687566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.401 qpair failed and we were unable to recover it. 00:29:38.401 [2024-07-15 14:00:04.697429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.401 [2024-07-15 14:00:04.697534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.401 [2024-07-15 14:00:04.697550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.401 [2024-07-15 14:00:04.697557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.401 [2024-07-15 14:00:04.697567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.401 [2024-07-15 14:00:04.697581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.401 qpair failed and we were unable to recover it. 
00:29:38.401 [2024-07-15 14:00:04.707518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.401 [2024-07-15 14:00:04.707595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.401 [2024-07-15 14:00:04.707611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.401 [2024-07-15 14:00:04.707618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.401 [2024-07-15 14:00:04.707624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.401 [2024-07-15 14:00:04.707638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.401 qpair failed and we were unable to recover it. 00:29:38.401 [2024-07-15 14:00:04.717583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.401 [2024-07-15 14:00:04.717664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.401 [2024-07-15 14:00:04.717680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.401 [2024-07-15 14:00:04.717687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.401 [2024-07-15 14:00:04.717693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.401 [2024-07-15 14:00:04.717707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.401 qpair failed and we were unable to recover it. 00:29:38.401 [2024-07-15 14:00:04.727569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.401 [2024-07-15 14:00:04.727653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.401 [2024-07-15 14:00:04.727670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.401 [2024-07-15 14:00:04.727677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.401 [2024-07-15 14:00:04.727682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.401 [2024-07-15 14:00:04.727696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.401 qpair failed and we were unable to recover it. 
00:29:38.401 [2024-07-15 14:00:04.737597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.401 [2024-07-15 14:00:04.737675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.401 [2024-07-15 14:00:04.737691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.401 [2024-07-15 14:00:04.737698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.401 [2024-07-15 14:00:04.737704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.401 [2024-07-15 14:00:04.737718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.401 qpair failed and we were unable to recover it. 00:29:38.401 [2024-07-15 14:00:04.747683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.401 [2024-07-15 14:00:04.747764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.401 [2024-07-15 14:00:04.747785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.401 [2024-07-15 14:00:04.747792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.401 [2024-07-15 14:00:04.747798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.401 [2024-07-15 14:00:04.747813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.401 qpair failed and we were unable to recover it. 00:29:38.401 [2024-07-15 14:00:04.757662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.401 [2024-07-15 14:00:04.757749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.401 [2024-07-15 14:00:04.757775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.401 [2024-07-15 14:00:04.757784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.401 [2024-07-15 14:00:04.757791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.401 [2024-07-15 14:00:04.757809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.401 qpair failed and we were unable to recover it. 
00:29:38.401 [2024-07-15 14:00:04.767686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.401 [2024-07-15 14:00:04.767777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.401 [2024-07-15 14:00:04.767802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.401 [2024-07-15 14:00:04.767810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.401 [2024-07-15 14:00:04.767817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.401 [2024-07-15 14:00:04.767836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.401 qpair failed and we were unable to recover it. 00:29:38.401 [2024-07-15 14:00:04.777667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.401 [2024-07-15 14:00:04.777752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.401 [2024-07-15 14:00:04.777778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.402 [2024-07-15 14:00:04.777787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.402 [2024-07-15 14:00:04.777793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.402 [2024-07-15 14:00:04.777812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.402 qpair failed and we were unable to recover it. 00:29:38.402 [2024-07-15 14:00:04.787785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.402 [2024-07-15 14:00:04.787868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.402 [2024-07-15 14:00:04.787885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.402 [2024-07-15 14:00:04.787897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.402 [2024-07-15 14:00:04.787903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.402 [2024-07-15 14:00:04.787918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.402 qpair failed and we were unable to recover it. 
00:29:38.402 [2024-07-15 14:00:04.797734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.402 [2024-07-15 14:00:04.797820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.402 [2024-07-15 14:00:04.797846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.402 [2024-07-15 14:00:04.797854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.402 [2024-07-15 14:00:04.797861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.402 [2024-07-15 14:00:04.797880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.402 qpair failed and we were unable to recover it. 00:29:38.402 [2024-07-15 14:00:04.807833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.402 [2024-07-15 14:00:04.807932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.402 [2024-07-15 14:00:04.807958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.402 [2024-07-15 14:00:04.807966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.402 [2024-07-15 14:00:04.807973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.402 [2024-07-15 14:00:04.807991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.402 qpair failed and we were unable to recover it. 00:29:38.402 [2024-07-15 14:00:04.817868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.402 [2024-07-15 14:00:04.817949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.402 [2024-07-15 14:00:04.817966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.402 [2024-07-15 14:00:04.817973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.402 [2024-07-15 14:00:04.817979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.402 [2024-07-15 14:00:04.817994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.402 qpair failed and we were unable to recover it. 
00:29:38.402 [2024-07-15 14:00:04.827904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.402 [2024-07-15 14:00:04.827979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.402 [2024-07-15 14:00:04.827995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.402 [2024-07-15 14:00:04.828002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.402 [2024-07-15 14:00:04.828008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.402 [2024-07-15 14:00:04.828023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.402 qpair failed and we were unable to recover it. 00:29:38.402 [2024-07-15 14:00:04.837878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.402 [2024-07-15 14:00:04.837960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.402 [2024-07-15 14:00:04.837976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.402 [2024-07-15 14:00:04.837983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.402 [2024-07-15 14:00:04.837989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.402 [2024-07-15 14:00:04.838004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.402 qpair failed and we were unable to recover it. 00:29:38.402 [2024-07-15 14:00:04.847814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.402 [2024-07-15 14:00:04.847897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.402 [2024-07-15 14:00:04.847912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.402 [2024-07-15 14:00:04.847919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.402 [2024-07-15 14:00:04.847925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.402 [2024-07-15 14:00:04.847939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.402 qpair failed and we were unable to recover it. 
00:29:38.402 [2024-07-15 14:00:04.857926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.402 [2024-07-15 14:00:04.858004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.402 [2024-07-15 14:00:04.858020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.402 [2024-07-15 14:00:04.858027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.402 [2024-07-15 14:00:04.858033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.402 [2024-07-15 14:00:04.858047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.402 qpair failed and we were unable to recover it. 00:29:38.402 [2024-07-15 14:00:04.867952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.402 [2024-07-15 14:00:04.868030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.402 [2024-07-15 14:00:04.868045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.402 [2024-07-15 14:00:04.868052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.402 [2024-07-15 14:00:04.868059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.402 [2024-07-15 14:00:04.868073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.402 qpair failed and we were unable to recover it. 00:29:38.402 [2024-07-15 14:00:04.877895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.402 [2024-07-15 14:00:04.877974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.402 [2024-07-15 14:00:04.877991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.402 [2024-07-15 14:00:04.878002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.402 [2024-07-15 14:00:04.878008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.402 [2024-07-15 14:00:04.878023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.402 qpair failed and we were unable to recover it. 
00:29:38.402 [2024-07-15 14:00:04.888006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.402 [2024-07-15 14:00:04.888092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.402 [2024-07-15 14:00:04.888109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.402 [2024-07-15 14:00:04.888117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.402 [2024-07-15 14:00:04.888128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.402 [2024-07-15 14:00:04.888142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.402 qpair failed and we were unable to recover it. 00:29:38.402 [2024-07-15 14:00:04.898035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.402 [2024-07-15 14:00:04.898114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.402 [2024-07-15 14:00:04.898135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.402 [2024-07-15 14:00:04.898142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.402 [2024-07-15 14:00:04.898148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.402 [2024-07-15 14:00:04.898162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.402 qpair failed and we were unable to recover it. 00:29:38.402 [2024-07-15 14:00:04.908160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.402 [2024-07-15 14:00:04.908273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.402 [2024-07-15 14:00:04.908289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.402 [2024-07-15 14:00:04.908296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.402 [2024-07-15 14:00:04.908302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.402 [2024-07-15 14:00:04.908316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.402 qpair failed and we were unable to recover it. 
00:29:38.402 [2024-07-15 14:00:04.918170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.402 [2024-07-15 14:00:04.918254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.402 [2024-07-15 14:00:04.918270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.402 [2024-07-15 14:00:04.918277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.402 [2024-07-15 14:00:04.918284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.403 [2024-07-15 14:00:04.918298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.403 qpair failed and we were unable to recover it. 00:29:38.664 [2024-07-15 14:00:04.928088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.664 [2024-07-15 14:00:04.928175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.664 [2024-07-15 14:00:04.928191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.664 [2024-07-15 14:00:04.928198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.664 [2024-07-15 14:00:04.928204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.664 [2024-07-15 14:00:04.928218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.664 qpair failed and we were unable to recover it. 00:29:38.664 [2024-07-15 14:00:04.938161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.664 [2024-07-15 14:00:04.938241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.664 [2024-07-15 14:00:04.938257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.664 [2024-07-15 14:00:04.938264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.664 [2024-07-15 14:00:04.938270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.664 [2024-07-15 14:00:04.938284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.664 qpair failed and we were unable to recover it. 
00:29:38.665 [2024-07-15 14:00:04.948283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:04.948396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:04.948413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:04.948419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:04.948425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:04.948440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 00:29:38.665 [2024-07-15 14:00:04.958183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:04.958296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:04.958312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:04.958319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:04.958325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:04.958340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 00:29:38.665 [2024-07-15 14:00:04.968265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:04.968352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:04.968368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:04.968378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:04.968384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:04.968398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 
00:29:38.665 [2024-07-15 14:00:04.978304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:04.978382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:04.978398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:04.978405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:04.978410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:04.978425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 00:29:38.665 [2024-07-15 14:00:04.988313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:04.988400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:04.988416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:04.988422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:04.988428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:04.988442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 00:29:38.665 [2024-07-15 14:00:04.998386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:04.998472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:04.998487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:04.998495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:04.998501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:04.998515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 
00:29:38.665 [2024-07-15 14:00:05.008339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:05.008434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:05.008450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:05.008457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:05.008464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:05.008479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 00:29:38.665 [2024-07-15 14:00:05.018415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:05.018493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:05.018509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:05.018517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:05.018523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:05.018537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 00:29:38.665 [2024-07-15 14:00:05.028376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:05.028460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:05.028476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:05.028482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:05.028489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:05.028502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 
00:29:38.665 [2024-07-15 14:00:05.038479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:05.038564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:05.038580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:05.038587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:05.038592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:05.038606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 00:29:38.665 [2024-07-15 14:00:05.048460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:05.048551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:05.048567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:05.048573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:05.048579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:05.048593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 00:29:38.665 [2024-07-15 14:00:05.058523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:05.058598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:05.058617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:05.058624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:05.058630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:05.058644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 
00:29:38.665 [2024-07-15 14:00:05.068507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:05.068581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:05.068596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:05.068603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:05.068609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:05.068623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 00:29:38.665 [2024-07-15 14:00:05.078541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:05.078645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:05.078660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:05.078668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:05.078674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:05.078688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 00:29:38.665 [2024-07-15 14:00:05.088600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:05.088687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:05.088704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:05.088712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:05.088718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:05.088733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 
00:29:38.665 [2024-07-15 14:00:05.098628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:05.098703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:05.098719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:05.098725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:05.098731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:05.098745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 00:29:38.665 [2024-07-15 14:00:05.108674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:05.108753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:05.108769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:05.108776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:05.108782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:05.108795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 00:29:38.665 [2024-07-15 14:00:05.118703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:05.118793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:05.118808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:05.118814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:05.118820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:05.118833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 
00:29:38.665 [2024-07-15 14:00:05.128722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:05.128898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:05.128926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.665 [2024-07-15 14:00:05.128935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.665 [2024-07-15 14:00:05.128942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.665 [2024-07-15 14:00:05.128960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.665 qpair failed and we were unable to recover it. 00:29:38.665 [2024-07-15 14:00:05.138811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.665 [2024-07-15 14:00:05.138894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.665 [2024-07-15 14:00:05.138919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.666 [2024-07-15 14:00:05.138927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.666 [2024-07-15 14:00:05.138934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.666 [2024-07-15 14:00:05.138953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.666 qpair failed and we were unable to recover it. 00:29:38.666 [2024-07-15 14:00:05.148768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.666 [2024-07-15 14:00:05.148892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.666 [2024-07-15 14:00:05.148922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.666 [2024-07-15 14:00:05.148930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.666 [2024-07-15 14:00:05.148937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.666 [2024-07-15 14:00:05.148955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.666 qpair failed and we were unable to recover it. 
00:29:38.666 [2024-07-15 14:00:05.158788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.666 [2024-07-15 14:00:05.158869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.666 [2024-07-15 14:00:05.158887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.666 [2024-07-15 14:00:05.158894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.666 [2024-07-15 14:00:05.158900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.666 [2024-07-15 14:00:05.158915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.666 qpair failed and we were unable to recover it. 00:29:38.666 [2024-07-15 14:00:05.168827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.666 [2024-07-15 14:00:05.168909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.666 [2024-07-15 14:00:05.168925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.666 [2024-07-15 14:00:05.168932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.666 [2024-07-15 14:00:05.168938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.666 [2024-07-15 14:00:05.168952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.666 qpair failed and we were unable to recover it. 00:29:38.666 [2024-07-15 14:00:05.178810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.666 [2024-07-15 14:00:05.178894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.666 [2024-07-15 14:00:05.178910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.666 [2024-07-15 14:00:05.178917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.666 [2024-07-15 14:00:05.178923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.666 [2024-07-15 14:00:05.178937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.666 qpair failed and we were unable to recover it. 
00:29:38.666 [2024-07-15 14:00:05.188871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.666 [2024-07-15 14:00:05.188958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.666 [2024-07-15 14:00:05.188984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.666 [2024-07-15 14:00:05.188992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.666 [2024-07-15 14:00:05.188999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.666 [2024-07-15 14:00:05.189023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.666 qpair failed and we were unable to recover it. 00:29:38.926 [2024-07-15 14:00:05.198790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.926 [2024-07-15 14:00:05.198874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.926 [2024-07-15 14:00:05.198891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.926 [2024-07-15 14:00:05.198899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.926 [2024-07-15 14:00:05.198905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.926 [2024-07-15 14:00:05.198920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-07-15 14:00:05.208972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.926 [2024-07-15 14:00:05.209096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.926 [2024-07-15 14:00:05.209112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.926 [2024-07-15 14:00:05.209119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.926 [2024-07-15 14:00:05.209129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.926 [2024-07-15 14:00:05.209144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.926 qpair failed and we were unable to recover it. 
00:29:38.926 [2024-07-15 14:00:05.218881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.926 [2024-07-15 14:00:05.218964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.926 [2024-07-15 14:00:05.218980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.926 [2024-07-15 14:00:05.218987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.926 [2024-07-15 14:00:05.218993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.926 [2024-07-15 14:00:05.219007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-07-15 14:00:05.228978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.926 [2024-07-15 14:00:05.229069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.926 [2024-07-15 14:00:05.229085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.926 [2024-07-15 14:00:05.229092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.926 [2024-07-15 14:00:05.229098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.926 [2024-07-15 14:00:05.229112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-07-15 14:00:05.238980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.926 [2024-07-15 14:00:05.239066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.926 [2024-07-15 14:00:05.239085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.926 [2024-07-15 14:00:05.239092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.926 [2024-07-15 14:00:05.239098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.926 [2024-07-15 14:00:05.239112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.926 qpair failed and we were unable to recover it. 
00:29:38.926 [2024-07-15 14:00:05.249016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.926 [2024-07-15 14:00:05.249098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.926 [2024-07-15 14:00:05.249114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.926 [2024-07-15 14:00:05.249125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.926 [2024-07-15 14:00:05.249132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.926 [2024-07-15 14:00:05.249146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-07-15 14:00:05.259062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.926 [2024-07-15 14:00:05.259142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.926 [2024-07-15 14:00:05.259158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.926 [2024-07-15 14:00:05.259165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.926 [2024-07-15 14:00:05.259171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.926 [2024-07-15 14:00:05.259185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-07-15 14:00:05.268975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.926 [2024-07-15 14:00:05.269055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.926 [2024-07-15 14:00:05.269071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.926 [2024-07-15 14:00:05.269078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.926 [2024-07-15 14:00:05.269085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.926 [2024-07-15 14:00:05.269099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.926 qpair failed and we were unable to recover it. 
00:29:38.926 [2024-07-15 14:00:05.279143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.926 [2024-07-15 14:00:05.279264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.926 [2024-07-15 14:00:05.279280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.926 [2024-07-15 14:00:05.279287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.926 [2024-07-15 14:00:05.279293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.926 [2024-07-15 14:00:05.279315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-07-15 14:00:05.289157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.926 [2024-07-15 14:00:05.289240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.926 [2024-07-15 14:00:05.289257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.926 [2024-07-15 14:00:05.289264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.926 [2024-07-15 14:00:05.289270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.926 [2024-07-15 14:00:05.289285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-07-15 14:00:05.299196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.926 [2024-07-15 14:00:05.299279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.926 [2024-07-15 14:00:05.299295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.926 [2024-07-15 14:00:05.299302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.926 [2024-07-15 14:00:05.299308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.926 [2024-07-15 14:00:05.299322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.926 qpair failed and we were unable to recover it. 
00:29:38.926 [2024-07-15 14:00:05.309115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.926 [2024-07-15 14:00:05.309198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.926 [2024-07-15 14:00:05.309214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.926 [2024-07-15 14:00:05.309221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.926 [2024-07-15 14:00:05.309227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.926 [2024-07-15 14:00:05.309241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-07-15 14:00:05.319224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.926 [2024-07-15 14:00:05.319303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.926 [2024-07-15 14:00:05.319319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.926 [2024-07-15 14:00:05.319326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.926 [2024-07-15 14:00:05.319332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.926 [2024-07-15 14:00:05.319346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-07-15 14:00:05.329283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.926 [2024-07-15 14:00:05.329409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.927 [2024-07-15 14:00:05.329428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.927 [2024-07-15 14:00:05.329435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.927 [2024-07-15 14:00:05.329441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.927 [2024-07-15 14:00:05.329455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.927 qpair failed and we were unable to recover it. 
00:29:38.927 [2024-07-15 14:00:05.339269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.927 [2024-07-15 14:00:05.339345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.927 [2024-07-15 14:00:05.339361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.927 [2024-07-15 14:00:05.339368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.927 [2024-07-15 14:00:05.339374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.927 [2024-07-15 14:00:05.339388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-07-15 14:00:05.349367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.927 [2024-07-15 14:00:05.349447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.927 [2024-07-15 14:00:05.349463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.927 [2024-07-15 14:00:05.349470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.927 [2024-07-15 14:00:05.349476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.927 [2024-07-15 14:00:05.349490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-07-15 14:00:05.359346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.927 [2024-07-15 14:00:05.359425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.927 [2024-07-15 14:00:05.359441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.927 [2024-07-15 14:00:05.359447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.927 [2024-07-15 14:00:05.359453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.927 [2024-07-15 14:00:05.359467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.927 qpair failed and we were unable to recover it. 
00:29:38.927 [2024-07-15 14:00:05.369381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.927 [2024-07-15 14:00:05.369462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.927 [2024-07-15 14:00:05.369478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.927 [2024-07-15 14:00:05.369485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.927 [2024-07-15 14:00:05.369491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.927 [2024-07-15 14:00:05.369508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-07-15 14:00:05.379513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.927 [2024-07-15 14:00:05.379591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.927 [2024-07-15 14:00:05.379607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.927 [2024-07-15 14:00:05.379614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.927 [2024-07-15 14:00:05.379620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.927 [2024-07-15 14:00:05.379634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-07-15 14:00:05.389461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.927 [2024-07-15 14:00:05.389539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.927 [2024-07-15 14:00:05.389554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.927 [2024-07-15 14:00:05.389561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.927 [2024-07-15 14:00:05.389567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.927 [2024-07-15 14:00:05.389581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.927 qpair failed and we were unable to recover it. 
00:29:38.927 [2024-07-15 14:00:05.399461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.927 [2024-07-15 14:00:05.399539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.927 [2024-07-15 14:00:05.399555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.927 [2024-07-15 14:00:05.399562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.927 [2024-07-15 14:00:05.399568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.927 [2024-07-15 14:00:05.399582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-07-15 14:00:05.409475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.927 [2024-07-15 14:00:05.409559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.927 [2024-07-15 14:00:05.409575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.927 [2024-07-15 14:00:05.409581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.927 [2024-07-15 14:00:05.409587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.927 [2024-07-15 14:00:05.409601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-07-15 14:00:05.419533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.927 [2024-07-15 14:00:05.419638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.927 [2024-07-15 14:00:05.419658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.927 [2024-07-15 14:00:05.419665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.927 [2024-07-15 14:00:05.419671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.927 [2024-07-15 14:00:05.419685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.927 qpair failed and we were unable to recover it. 
00:29:38.927 [2024-07-15 14:00:05.429532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.927 [2024-07-15 14:00:05.429610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.927 [2024-07-15 14:00:05.429625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.927 [2024-07-15 14:00:05.429632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.927 [2024-07-15 14:00:05.429638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.927 [2024-07-15 14:00:05.429652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-07-15 14:00:05.439673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.927 [2024-07-15 14:00:05.439755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.927 [2024-07-15 14:00:05.439771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.927 [2024-07-15 14:00:05.439778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.927 [2024-07-15 14:00:05.439783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.927 [2024-07-15 14:00:05.439797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-07-15 14:00:05.449597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.927 [2024-07-15 14:00:05.449678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.927 [2024-07-15 14:00:05.449695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.927 [2024-07-15 14:00:05.449702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.927 [2024-07-15 14:00:05.449707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:38.927 [2024-07-15 14:00:05.449722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:38.927 qpair failed and we were unable to recover it. 
00:29:39.188 [2024-07-15 14:00:05.459700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.188 [2024-07-15 14:00:05.459787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.188 [2024-07-15 14:00:05.459803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.188 [2024-07-15 14:00:05.459810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.188 [2024-07-15 14:00:05.459820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.188 [2024-07-15 14:00:05.459835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.188 qpair failed and we were unable to recover it. 00:29:39.188 [2024-07-15 14:00:05.469656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.188 [2024-07-15 14:00:05.469766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.188 [2024-07-15 14:00:05.469791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.188 [2024-07-15 14:00:05.469800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.188 [2024-07-15 14:00:05.469806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.188 [2024-07-15 14:00:05.469825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.188 qpair failed and we were unable to recover it. 00:29:39.188 [2024-07-15 14:00:05.479701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.188 [2024-07-15 14:00:05.479786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.188 [2024-07-15 14:00:05.479812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.188 [2024-07-15 14:00:05.479820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.188 [2024-07-15 14:00:05.479827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.188 [2024-07-15 14:00:05.479845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.188 qpair failed and we were unable to recover it. 
00:29:39.188 [2024-07-15 14:00:05.489707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.188 [2024-07-15 14:00:05.489793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.188 [2024-07-15 14:00:05.489818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.188 [2024-07-15 14:00:05.489827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.188 [2024-07-15 14:00:05.489833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.188 [2024-07-15 14:00:05.489851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.188 qpair failed and we were unable to recover it. 00:29:39.188 [2024-07-15 14:00:05.499631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.188 [2024-07-15 14:00:05.499727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.188 [2024-07-15 14:00:05.499745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.188 [2024-07-15 14:00:05.499752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.188 [2024-07-15 14:00:05.499758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.188 [2024-07-15 14:00:05.499774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.188 qpair failed and we were unable to recover it. 00:29:39.188 [2024-07-15 14:00:05.509739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.189 [2024-07-15 14:00:05.509824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.189 [2024-07-15 14:00:05.509840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.189 [2024-07-15 14:00:05.509848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.189 [2024-07-15 14:00:05.509854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.189 [2024-07-15 14:00:05.509868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.189 qpair failed and we were unable to recover it. 
00:29:39.189 [2024-07-15 14:00:05.519778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.189 [2024-07-15 14:00:05.519859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.189 [2024-07-15 14:00:05.519874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.189 [2024-07-15 14:00:05.519882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.189 [2024-07-15 14:00:05.519888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.189 [2024-07-15 14:00:05.519902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.189 qpair failed and we were unable to recover it. 00:29:39.189 [2024-07-15 14:00:05.529801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.189 [2024-07-15 14:00:05.529890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.189 [2024-07-15 14:00:05.529906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.189 [2024-07-15 14:00:05.529913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.189 [2024-07-15 14:00:05.529919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.189 [2024-07-15 14:00:05.529933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.189 qpair failed and we were unable to recover it. 00:29:39.189 [2024-07-15 14:00:05.539808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.189 [2024-07-15 14:00:05.539890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.189 [2024-07-15 14:00:05.539906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.189 [2024-07-15 14:00:05.539913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.189 [2024-07-15 14:00:05.539918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.189 [2024-07-15 14:00:05.539932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.189 qpair failed and we were unable to recover it. 
00:29:39.189 [2024-07-15 14:00:05.549774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.189 [2024-07-15 14:00:05.549849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.189 [2024-07-15 14:00:05.549864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.189 [2024-07-15 14:00:05.549871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.189 [2024-07-15 14:00:05.549881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.189 [2024-07-15 14:00:05.549895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.189 qpair failed and we were unable to recover it. 00:29:39.189 [2024-07-15 14:00:05.559891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.189 [2024-07-15 14:00:05.559971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.189 [2024-07-15 14:00:05.559988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.189 [2024-07-15 14:00:05.559994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.189 [2024-07-15 14:00:05.560000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.189 [2024-07-15 14:00:05.560014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.189 qpair failed and we were unable to recover it. 00:29:39.189 [2024-07-15 14:00:05.569936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.189 [2024-07-15 14:00:05.570018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.189 [2024-07-15 14:00:05.570035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.189 [2024-07-15 14:00:05.570042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.189 [2024-07-15 14:00:05.570048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.189 [2024-07-15 14:00:05.570061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.189 qpair failed and we were unable to recover it. 
00:29:39.189 [2024-07-15 14:00:05.579934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.189 [2024-07-15 14:00:05.580008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.189 [2024-07-15 14:00:05.580024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.189 [2024-07-15 14:00:05.580031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.189 [2024-07-15 14:00:05.580037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.189 [2024-07-15 14:00:05.580051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.189 qpair failed and we were unable to recover it. 00:29:39.189 [2024-07-15 14:00:05.590016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.189 [2024-07-15 14:00:05.590095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.189 [2024-07-15 14:00:05.590112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.189 [2024-07-15 14:00:05.590118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.189 [2024-07-15 14:00:05.590131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.189 [2024-07-15 14:00:05.590145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.189 qpair failed and we were unable to recover it. 00:29:39.189 [2024-07-15 14:00:05.600044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.189 [2024-07-15 14:00:05.600133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.189 [2024-07-15 14:00:05.600150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.189 [2024-07-15 14:00:05.600157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.189 [2024-07-15 14:00:05.600163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.189 [2024-07-15 14:00:05.600178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.189 qpair failed and we were unable to recover it. 
00:29:39.189 [2024-07-15 14:00:05.610071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.189 [2024-07-15 14:00:05.610156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.189 [2024-07-15 14:00:05.610172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.189 [2024-07-15 14:00:05.610179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.189 [2024-07-15 14:00:05.610185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.189 [2024-07-15 14:00:05.610200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.189 qpair failed and we were unable to recover it. 00:29:39.189 [2024-07-15 14:00:05.620051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.189 [2024-07-15 14:00:05.620140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.189 [2024-07-15 14:00:05.620156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.189 [2024-07-15 14:00:05.620163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.189 [2024-07-15 14:00:05.620169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.189 [2024-07-15 14:00:05.620184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.189 qpair failed and we were unable to recover it. 00:29:39.189 [2024-07-15 14:00:05.630072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.189 [2024-07-15 14:00:05.630148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.189 [2024-07-15 14:00:05.630164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.189 [2024-07-15 14:00:05.630171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.189 [2024-07-15 14:00:05.630177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.189 [2024-07-15 14:00:05.630191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.189 qpair failed and we were unable to recover it. 
00:29:39.189 [2024-07-15 14:00:05.640151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.189 [2024-07-15 14:00:05.640257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.189 [2024-07-15 14:00:05.640273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.189 [2024-07-15 14:00:05.640280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.189 [2024-07-15 14:00:05.640290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.189 [2024-07-15 14:00:05.640304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.189 qpair failed and we were unable to recover it. 00:29:39.189 [2024-07-15 14:00:05.650133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.189 [2024-07-15 14:00:05.650213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.189 [2024-07-15 14:00:05.650229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.189 [2024-07-15 14:00:05.650236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.190 [2024-07-15 14:00:05.650242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.190 [2024-07-15 14:00:05.650256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.190 qpair failed and we were unable to recover it. 00:29:39.190 [2024-07-15 14:00:05.660172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.190 [2024-07-15 14:00:05.660250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.190 [2024-07-15 14:00:05.660266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.190 [2024-07-15 14:00:05.660272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.190 [2024-07-15 14:00:05.660278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.190 [2024-07-15 14:00:05.660293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.190 qpair failed and we were unable to recover it. 
00:29:39.190 [2024-07-15 14:00:05.670093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.190 [2024-07-15 14:00:05.670170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.190 [2024-07-15 14:00:05.670186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.190 [2024-07-15 14:00:05.670193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.190 [2024-07-15 14:00:05.670198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.190 [2024-07-15 14:00:05.670213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.190 qpair failed and we were unable to recover it. 00:29:39.190 [2024-07-15 14:00:05.680220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.190 [2024-07-15 14:00:05.680304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.190 [2024-07-15 14:00:05.680319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.190 [2024-07-15 14:00:05.680326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.190 [2024-07-15 14:00:05.680332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.190 [2024-07-15 14:00:05.680347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.190 qpair failed and we were unable to recover it. 00:29:39.190 [2024-07-15 14:00:05.690245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.190 [2024-07-15 14:00:05.690331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.190 [2024-07-15 14:00:05.690347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.190 [2024-07-15 14:00:05.690354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.190 [2024-07-15 14:00:05.690360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.190 [2024-07-15 14:00:05.690374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.190 qpair failed and we were unable to recover it. 
00:29:39.190 [2024-07-15 14:00:05.700191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.190 [2024-07-15 14:00:05.700269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.190 [2024-07-15 14:00:05.700285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.190 [2024-07-15 14:00:05.700292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.190 [2024-07-15 14:00:05.700298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.190 [2024-07-15 14:00:05.700312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.190 qpair failed and we were unable to recover it. 00:29:39.190 [2024-07-15 14:00:05.710298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.190 [2024-07-15 14:00:05.710375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.190 [2024-07-15 14:00:05.710392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.190 [2024-07-15 14:00:05.710399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.190 [2024-07-15 14:00:05.710405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.190 [2024-07-15 14:00:05.710419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.190 qpair failed and we were unable to recover it. 00:29:39.464 [2024-07-15 14:00:05.720365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.720446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.720461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.720468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.720474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.720489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 
00:29:39.464 [2024-07-15 14:00:05.730475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.730570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.730585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.730596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.730602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.730616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 00:29:39.464 [2024-07-15 14:00:05.740370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.740449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.740464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.740471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.740477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.740491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 00:29:39.464 [2024-07-15 14:00:05.750453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.750532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.750548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.750555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.750561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.750574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 
00:29:39.464 [2024-07-15 14:00:05.760472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.760565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.760580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.760587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.760593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.760607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 00:29:39.464 [2024-07-15 14:00:05.770502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.770590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.770606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.770613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.770619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.770632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 00:29:39.464 [2024-07-15 14:00:05.780464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.780540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.780556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.780563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.780569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.780582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 
00:29:39.464 [2024-07-15 14:00:05.790523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.790634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.790650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.790657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.790663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.790677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 00:29:39.464 [2024-07-15 14:00:05.800571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.800654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.800670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.800677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.800684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.800698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 00:29:39.464 [2024-07-15 14:00:05.810570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.810655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.810671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.810678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.810684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.810698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 
00:29:39.464 [2024-07-15 14:00:05.820520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.820597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.820613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.820623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.820629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.820643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 00:29:39.464 [2024-07-15 14:00:05.830669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.830755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.830771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.830779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.830785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.830800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 00:29:39.464 [2024-07-15 14:00:05.840561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.840644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.840660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.840667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.840673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.840687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 
00:29:39.464 [2024-07-15 14:00:05.850723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.850812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.850828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.850835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.850842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.850856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 00:29:39.464 [2024-07-15 14:00:05.860754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.860843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.860869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.860878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.860885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.860903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 00:29:39.464 [2024-07-15 14:00:05.870761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.870850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.870876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.870884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.870891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.870909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 
00:29:39.464 [2024-07-15 14:00:05.880714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.880803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.880828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.880837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.880843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.880862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 00:29:39.464 [2024-07-15 14:00:05.890799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.890888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.890914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.890922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.890929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.890947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 00:29:39.464 [2024-07-15 14:00:05.900823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.900906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.900923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.464 [2024-07-15 14:00:05.900931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.464 [2024-07-15 14:00:05.900937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.464 [2024-07-15 14:00:05.900952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.464 qpair failed and we were unable to recover it. 
00:29:39.464 [2024-07-15 14:00:05.910885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.464 [2024-07-15 14:00:05.910975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.464 [2024-07-15 14:00:05.911001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.465 [2024-07-15 14:00:05.911014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.465 [2024-07-15 14:00:05.911021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.465 [2024-07-15 14:00:05.911039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.465 qpair failed and we were unable to recover it. 00:29:39.465 [2024-07-15 14:00:05.920905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.465 [2024-07-15 14:00:05.920986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.465 [2024-07-15 14:00:05.921005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.465 [2024-07-15 14:00:05.921012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.465 [2024-07-15 14:00:05.921018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.465 [2024-07-15 14:00:05.921034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.465 qpair failed and we were unable to recover it. 00:29:39.465 [2024-07-15 14:00:05.930906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.465 [2024-07-15 14:00:05.930989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.465 [2024-07-15 14:00:05.931003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.465 [2024-07-15 14:00:05.931010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.465 [2024-07-15 14:00:05.931016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.465 [2024-07-15 14:00:05.931030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.465 qpair failed and we were unable to recover it. 
00:29:39.465 [2024-07-15 14:00:05.940955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.465 [2024-07-15 14:00:05.941045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.465 [2024-07-15 14:00:05.941061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.465 [2024-07-15 14:00:05.941067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.465 [2024-07-15 14:00:05.941073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.465 [2024-07-15 14:00:05.941087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.465 qpair failed and we were unable to recover it. 00:29:39.465 [2024-07-15 14:00:05.950970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.465 [2024-07-15 14:00:05.951047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.465 [2024-07-15 14:00:05.951063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.465 [2024-07-15 14:00:05.951070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.465 [2024-07-15 14:00:05.951076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.465 [2024-07-15 14:00:05.951090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.465 qpair failed and we were unable to recover it. 00:29:39.465 [2024-07-15 14:00:05.960995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.465 [2024-07-15 14:00:05.961075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.465 [2024-07-15 14:00:05.961091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.465 [2024-07-15 14:00:05.961098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.465 [2024-07-15 14:00:05.961103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.465 [2024-07-15 14:00:05.961117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.465 qpair failed and we were unable to recover it. 
00:29:39.465 [2024-07-15 14:00:05.971032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.465 [2024-07-15 14:00:05.971113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.465 [2024-07-15 14:00:05.971134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.465 [2024-07-15 14:00:05.971142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.465 [2024-07-15 14:00:05.971148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.465 [2024-07-15 14:00:05.971162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.465 qpair failed and we were unable to recover it. 00:29:39.465 [2024-07-15 14:00:05.981074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.465 [2024-07-15 14:00:05.981154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.465 [2024-07-15 14:00:05.981170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.465 [2024-07-15 14:00:05.981176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.465 [2024-07-15 14:00:05.981182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.465 [2024-07-15 14:00:05.981197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.465 qpair failed and we were unable to recover it. 00:29:39.728 [2024-07-15 14:00:05.991100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.728 [2024-07-15 14:00:05.991194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.728 [2024-07-15 14:00:05.991210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.728 [2024-07-15 14:00:05.991218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.728 [2024-07-15 14:00:05.991224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.728 [2024-07-15 14:00:05.991238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.728 qpair failed and we were unable to recover it. 
00:29:39.728 [2024-07-15 14:00:06.001142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.728 [2024-07-15 14:00:06.001221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.728 [2024-07-15 14:00:06.001241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.728 [2024-07-15 14:00:06.001248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.728 [2024-07-15 14:00:06.001254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.728 [2024-07-15 14:00:06.001268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.728 qpair failed and we were unable to recover it. 00:29:39.728 [2024-07-15 14:00:06.011133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.728 [2024-07-15 14:00:06.011220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.728 [2024-07-15 14:00:06.011236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.728 [2024-07-15 14:00:06.011243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.728 [2024-07-15 14:00:06.011249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.728 [2024-07-15 14:00:06.011263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.728 qpair failed and we were unable to recover it. 00:29:39.728 [2024-07-15 14:00:06.021215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.728 [2024-07-15 14:00:06.021289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.728 [2024-07-15 14:00:06.021305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.728 [2024-07-15 14:00:06.021312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.728 [2024-07-15 14:00:06.021318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.728 [2024-07-15 14:00:06.021332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.728 qpair failed and we were unable to recover it. 
00:29:39.728 [2024-07-15 14:00:06.031195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.728 [2024-07-15 14:00:06.031274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.728 [2024-07-15 14:00:06.031290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.728 [2024-07-15 14:00:06.031297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.728 [2024-07-15 14:00:06.031304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.728 [2024-07-15 14:00:06.031318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.728 qpair failed and we were unable to recover it. 00:29:39.728 [2024-07-15 14:00:06.041230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.728 [2024-07-15 14:00:06.041312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.728 [2024-07-15 14:00:06.041328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.728 [2024-07-15 14:00:06.041336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.728 [2024-07-15 14:00:06.041341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.728 [2024-07-15 14:00:06.041356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.728 qpair failed and we were unable to recover it. 00:29:39.728 [2024-07-15 14:00:06.051205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.728 [2024-07-15 14:00:06.051285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.728 [2024-07-15 14:00:06.051302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.728 [2024-07-15 14:00:06.051309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.729 [2024-07-15 14:00:06.051315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.729 [2024-07-15 14:00:06.051329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.729 qpair failed and we were unable to recover it. 
00:29:39.729 [2024-07-15 14:00:06.061304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.729 [2024-07-15 14:00:06.061399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.729 [2024-07-15 14:00:06.061415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.729 [2024-07-15 14:00:06.061422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.729 [2024-07-15 14:00:06.061428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.729 [2024-07-15 14:00:06.061442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.729 qpair failed and we were unable to recover it. 00:29:39.729 [2024-07-15 14:00:06.071340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.729 [2024-07-15 14:00:06.071427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.729 [2024-07-15 14:00:06.071442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.729 [2024-07-15 14:00:06.071449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.729 [2024-07-15 14:00:06.071455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.729 [2024-07-15 14:00:06.071469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.729 qpair failed and we were unable to recover it. 00:29:39.729 [2024-07-15 14:00:06.081363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.729 [2024-07-15 14:00:06.081441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.729 [2024-07-15 14:00:06.081457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.729 [2024-07-15 14:00:06.081464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.729 [2024-07-15 14:00:06.081470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.729 [2024-07-15 14:00:06.081483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.729 qpair failed and we were unable to recover it. 
00:29:39.729 [2024-07-15 14:00:06.091445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.729 [2024-07-15 14:00:06.091556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.729 [2024-07-15 14:00:06.091576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.729 [2024-07-15 14:00:06.091584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.729 [2024-07-15 14:00:06.091590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.729 [2024-07-15 14:00:06.091604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.729 qpair failed and we were unable to recover it. 00:29:39.729 [2024-07-15 14:00:06.101428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.729 [2024-07-15 14:00:06.101508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.729 [2024-07-15 14:00:06.101524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.729 [2024-07-15 14:00:06.101531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.729 [2024-07-15 14:00:06.101536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.729 [2024-07-15 14:00:06.101550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.729 qpair failed and we were unable to recover it. 00:29:39.729 [2024-07-15 14:00:06.111533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.729 [2024-07-15 14:00:06.111621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.729 [2024-07-15 14:00:06.111637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.729 [2024-07-15 14:00:06.111644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.729 [2024-07-15 14:00:06.111650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.729 [2024-07-15 14:00:06.111664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.729 qpair failed and we were unable to recover it. 
00:29:39.729 [2024-07-15 14:00:06.121463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.729 [2024-07-15 14:00:06.121542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.729 [2024-07-15 14:00:06.121559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.729 [2024-07-15 14:00:06.121566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.729 [2024-07-15 14:00:06.121572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.729 [2024-07-15 14:00:06.121586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.729 qpair failed and we were unable to recover it. 00:29:39.729 [2024-07-15 14:00:06.131544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.729 [2024-07-15 14:00:06.131630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.729 [2024-07-15 14:00:06.131650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.729 [2024-07-15 14:00:06.131657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.729 [2024-07-15 14:00:06.131664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.729 [2024-07-15 14:00:06.131682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.729 qpair failed and we were unable to recover it. 00:29:39.729 [2024-07-15 14:00:06.141515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.729 [2024-07-15 14:00:06.141607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.729 [2024-07-15 14:00:06.141624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.729 [2024-07-15 14:00:06.141631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.729 [2024-07-15 14:00:06.141637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.729 [2024-07-15 14:00:06.141652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.729 qpair failed and we were unable to recover it. 
00:29:39.729 [2024-07-15 14:00:06.151534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.729 [2024-07-15 14:00:06.151612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.729 [2024-07-15 14:00:06.151628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.729 [2024-07-15 14:00:06.151635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.729 [2024-07-15 14:00:06.151642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.729 [2024-07-15 14:00:06.151655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.729 qpair failed and we were unable to recover it. 00:29:39.729 [2024-07-15 14:00:06.161569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.729 [2024-07-15 14:00:06.161645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.729 [2024-07-15 14:00:06.161662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.729 [2024-07-15 14:00:06.161669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.729 [2024-07-15 14:00:06.161675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.729 [2024-07-15 14:00:06.161689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.729 qpair failed and we were unable to recover it. 00:29:39.729 [2024-07-15 14:00:06.171594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.729 [2024-07-15 14:00:06.171681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.729 [2024-07-15 14:00:06.171697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.729 [2024-07-15 14:00:06.171704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.729 [2024-07-15 14:00:06.171710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.729 [2024-07-15 14:00:06.171723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.729 qpair failed and we were unable to recover it. 
00:29:39.729 [2024-07-15 14:00:06.181639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.729 [2024-07-15 14:00:06.181725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.729 [2024-07-15 14:00:06.181755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.729 [2024-07-15 14:00:06.181764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.729 [2024-07-15 14:00:06.181770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.729 [2024-07-15 14:00:06.181790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.729 qpair failed and we were unable to recover it. 00:29:39.729 [2024-07-15 14:00:06.191657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.729 [2024-07-15 14:00:06.191740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.729 [2024-07-15 14:00:06.191766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.729 [2024-07-15 14:00:06.191775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.729 [2024-07-15 14:00:06.191784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.729 [2024-07-15 14:00:06.191802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.729 qpair failed and we were unable to recover it. 00:29:39.729 [2024-07-15 14:00:06.201617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.730 [2024-07-15 14:00:06.201698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.730 [2024-07-15 14:00:06.201715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.730 [2024-07-15 14:00:06.201723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.730 [2024-07-15 14:00:06.201729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.730 [2024-07-15 14:00:06.201744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.730 qpair failed and we were unable to recover it. 
00:29:39.730 [2024-07-15 14:00:06.211615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.730 [2024-07-15 14:00:06.211705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.730 [2024-07-15 14:00:06.211722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.730 [2024-07-15 14:00:06.211729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.730 [2024-07-15 14:00:06.211735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.730 [2024-07-15 14:00:06.211750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.730 qpair failed and we were unable to recover it. 00:29:39.730 [2024-07-15 14:00:06.221740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.730 [2024-07-15 14:00:06.221820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.730 [2024-07-15 14:00:06.221837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.730 [2024-07-15 14:00:06.221843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.730 [2024-07-15 14:00:06.221849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.730 [2024-07-15 14:00:06.221868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.730 qpair failed and we were unable to recover it. 00:29:39.730 [2024-07-15 14:00:06.231786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.730 [2024-07-15 14:00:06.231868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.730 [2024-07-15 14:00:06.231895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.730 [2024-07-15 14:00:06.231904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.730 [2024-07-15 14:00:06.231910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.730 [2024-07-15 14:00:06.231929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.730 qpair failed and we were unable to recover it. 
00:29:39.730 [2024-07-15 14:00:06.241709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.730 [2024-07-15 14:00:06.241794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.730 [2024-07-15 14:00:06.241820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.730 [2024-07-15 14:00:06.241828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.730 [2024-07-15 14:00:06.241835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.730 [2024-07-15 14:00:06.241853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.730 qpair failed and we were unable to recover it. 00:29:39.730 [2024-07-15 14:00:06.251853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.730 [2024-07-15 14:00:06.251938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.730 [2024-07-15 14:00:06.251956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.730 [2024-07-15 14:00:06.251964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.730 [2024-07-15 14:00:06.251970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.730 [2024-07-15 14:00:06.251985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.730 qpair failed and we were unable to recover it. 00:29:39.992 [2024-07-15 14:00:06.261832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.992 [2024-07-15 14:00:06.261909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.992 [2024-07-15 14:00:06.261925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.992 [2024-07-15 14:00:06.261932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.992 [2024-07-15 14:00:06.261938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.992 [2024-07-15 14:00:06.261953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.992 qpair failed and we were unable to recover it. 
00:29:39.992 [2024-07-15 14:00:06.271872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.992 [2024-07-15 14:00:06.271948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.992 [2024-07-15 14:00:06.271969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.992 [2024-07-15 14:00:06.271976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.992 [2024-07-15 14:00:06.271982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.992 [2024-07-15 14:00:06.271996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-07-15 14:00:06.281898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.992 [2024-07-15 14:00:06.281978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.992 [2024-07-15 14:00:06.281994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.992 [2024-07-15 14:00:06.282001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.992 [2024-07-15 14:00:06.282007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.992 [2024-07-15 14:00:06.282022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-07-15 14:00:06.291962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.992 [2024-07-15 14:00:06.292049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.992 [2024-07-15 14:00:06.292065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.992 [2024-07-15 14:00:06.292072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.992 [2024-07-15 14:00:06.292078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.992 [2024-07-15 14:00:06.292092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.992 qpair failed and we were unable to recover it. 
00:29:39.992 [2024-07-15 14:00:06.301924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.992 [2024-07-15 14:00:06.301996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.992 [2024-07-15 14:00:06.302012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.992 [2024-07-15 14:00:06.302019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.992 [2024-07-15 14:00:06.302025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.992 [2024-07-15 14:00:06.302039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-07-15 14:00:06.311978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.992 [2024-07-15 14:00:06.312056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.992 [2024-07-15 14:00:06.312072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.992 [2024-07-15 14:00:06.312079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.992 [2024-07-15 14:00:06.312085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.992 [2024-07-15 14:00:06.312103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.992 qpair failed and we were unable to recover it. 00:29:39.992 [2024-07-15 14:00:06.322054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.993 [2024-07-15 14:00:06.322140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.993 [2024-07-15 14:00:06.322158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.993 [2024-07-15 14:00:06.322165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.993 [2024-07-15 14:00:06.322171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.993 [2024-07-15 14:00:06.322186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.993 qpair failed and we were unable to recover it. 
00:29:39.993 [2024-07-15 14:00:06.332057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.993 [2024-07-15 14:00:06.332143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.993 [2024-07-15 14:00:06.332161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.993 [2024-07-15 14:00:06.332168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.993 [2024-07-15 14:00:06.332175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.993 [2024-07-15 14:00:06.332189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 14:00:06.342029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.993 [2024-07-15 14:00:06.342103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.993 [2024-07-15 14:00:06.342118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.993 [2024-07-15 14:00:06.342130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.993 [2024-07-15 14:00:06.342136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.993 [2024-07-15 14:00:06.342150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 14:00:06.352095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.993 [2024-07-15 14:00:06.352175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.993 [2024-07-15 14:00:06.352191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.993 [2024-07-15 14:00:06.352198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.993 [2024-07-15 14:00:06.352204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.993 [2024-07-15 14:00:06.352218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.993 qpair failed and we were unable to recover it. 
00:29:39.993 [2024-07-15 14:00:06.362143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.993 [2024-07-15 14:00:06.362222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.993 [2024-07-15 14:00:06.362241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.993 [2024-07-15 14:00:06.362249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.993 [2024-07-15 14:00:06.362254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.993 [2024-07-15 14:00:06.362269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 14:00:06.372085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.993 [2024-07-15 14:00:06.372178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.993 [2024-07-15 14:00:06.372194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.993 [2024-07-15 14:00:06.372201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.993 [2024-07-15 14:00:06.372207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.993 [2024-07-15 14:00:06.372221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 14:00:06.382151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.993 [2024-07-15 14:00:06.382270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.993 [2024-07-15 14:00:06.382286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.993 [2024-07-15 14:00:06.382292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.993 [2024-07-15 14:00:06.382298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.993 [2024-07-15 14:00:06.382313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.993 qpair failed and we were unable to recover it. 
00:29:39.993 [2024-07-15 14:00:06.392251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.993 [2024-07-15 14:00:06.392330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.993 [2024-07-15 14:00:06.392345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.993 [2024-07-15 14:00:06.392352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.993 [2024-07-15 14:00:06.392358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.993 [2024-07-15 14:00:06.392372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 14:00:06.402269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.993 [2024-07-15 14:00:06.402361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.993 [2024-07-15 14:00:06.402377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.993 [2024-07-15 14:00:06.402383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.993 [2024-07-15 14:00:06.402393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.993 [2024-07-15 14:00:06.402408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 14:00:06.412275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.993 [2024-07-15 14:00:06.412365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.993 [2024-07-15 14:00:06.412381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.993 [2024-07-15 14:00:06.412388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.993 [2024-07-15 14:00:06.412394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.993 [2024-07-15 14:00:06.412408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.993 qpair failed and we were unable to recover it. 
00:29:39.993 [2024-07-15 14:00:06.422226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.993 [2024-07-15 14:00:06.422354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.993 [2024-07-15 14:00:06.422370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.993 [2024-07-15 14:00:06.422378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.993 [2024-07-15 14:00:06.422383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.993 [2024-07-15 14:00:06.422398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 14:00:06.432352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.993 [2024-07-15 14:00:06.432432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.993 [2024-07-15 14:00:06.432448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.993 [2024-07-15 14:00:06.432455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.993 [2024-07-15 14:00:06.432461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.993 [2024-07-15 14:00:06.432475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 14:00:06.442408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.993 [2024-07-15 14:00:06.442494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.993 [2024-07-15 14:00:06.442510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.993 [2024-07-15 14:00:06.442517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.993 [2024-07-15 14:00:06.442523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.993 [2024-07-15 14:00:06.442537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.993 qpair failed and we were unable to recover it. 
00:29:39.993 [2024-07-15 14:00:06.452409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.993 [2024-07-15 14:00:06.452496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.993 [2024-07-15 14:00:06.452512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.993 [2024-07-15 14:00:06.452519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.993 [2024-07-15 14:00:06.452525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.993 [2024-07-15 14:00:06.452539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.993 qpair failed and we were unable to recover it. 00:29:39.993 [2024-07-15 14:00:06.462307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.993 [2024-07-15 14:00:06.462378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.994 [2024-07-15 14:00:06.462393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.994 [2024-07-15 14:00:06.462400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.994 [2024-07-15 14:00:06.462406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.994 [2024-07-15 14:00:06.462421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 14:00:06.472436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.994 [2024-07-15 14:00:06.472511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.994 [2024-07-15 14:00:06.472527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.994 [2024-07-15 14:00:06.472534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.994 [2024-07-15 14:00:06.472540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.994 [2024-07-15 14:00:06.472554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.994 qpair failed and we were unable to recover it. 
00:29:39.994 [2024-07-15 14:00:06.482502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.994 [2024-07-15 14:00:06.482581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.994 [2024-07-15 14:00:06.482597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.994 [2024-07-15 14:00:06.482604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.994 [2024-07-15 14:00:06.482610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.994 [2024-07-15 14:00:06.482624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 14:00:06.492512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.994 [2024-07-15 14:00:06.492593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.994 [2024-07-15 14:00:06.492609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.994 [2024-07-15 14:00:06.492616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.994 [2024-07-15 14:00:06.492626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.994 [2024-07-15 14:00:06.492640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.994 qpair failed and we were unable to recover it. 00:29:39.994 [2024-07-15 14:00:06.502505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.994 [2024-07-15 14:00:06.502576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.994 [2024-07-15 14:00:06.502591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.994 [2024-07-15 14:00:06.502599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.994 [2024-07-15 14:00:06.502604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.994 [2024-07-15 14:00:06.502618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.994 qpair failed and we were unable to recover it. 
00:29:39.994 [2024-07-15 14:00:06.512464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.994 [2024-07-15 14:00:06.512542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.994 [2024-07-15 14:00:06.512558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.994 [2024-07-15 14:00:06.512565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.994 [2024-07-15 14:00:06.512571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:39.994 [2024-07-15 14:00:06.512584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.994 qpair failed and we were unable to recover it. 00:29:40.257 [2024-07-15 14:00:06.522634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.257 [2024-07-15 14:00:06.522754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.257 [2024-07-15 14:00:06.522770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.257 [2024-07-15 14:00:06.522778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.257 [2024-07-15 14:00:06.522784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.257 [2024-07-15 14:00:06.522798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.257 qpair failed and we were unable to recover it. 00:29:40.257 [2024-07-15 14:00:06.532719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.257 [2024-07-15 14:00:06.532813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.257 [2024-07-15 14:00:06.532829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.257 [2024-07-15 14:00:06.532836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.257 [2024-07-15 14:00:06.532842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.257 [2024-07-15 14:00:06.532856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.257 qpair failed and we were unable to recover it. 
00:29:40.257 [2024-07-15 14:00:06.542643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.257 [2024-07-15 14:00:06.542724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.257 [2024-07-15 14:00:06.542749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.257 [2024-07-15 14:00:06.542758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.257 [2024-07-15 14:00:06.542764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.257 [2024-07-15 14:00:06.542783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.257 qpair failed and we were unable to recover it. 00:29:40.257 [2024-07-15 14:00:06.552732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.257 [2024-07-15 14:00:06.552811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.257 [2024-07-15 14:00:06.552829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.257 [2024-07-15 14:00:06.552836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.257 [2024-07-15 14:00:06.552842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.257 [2024-07-15 14:00:06.552857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.257 qpair failed and we were unable to recover it. 00:29:40.257 [2024-07-15 14:00:06.562749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.257 [2024-07-15 14:00:06.562826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.257 [2024-07-15 14:00:06.562842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.257 [2024-07-15 14:00:06.562849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.257 [2024-07-15 14:00:06.562855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.257 [2024-07-15 14:00:06.562869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.257 qpair failed and we were unable to recover it. 
00:29:40.257 [2024-07-15 14:00:06.572650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.257 [2024-07-15 14:00:06.572733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.257 [2024-07-15 14:00:06.572749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.257 [2024-07-15 14:00:06.572756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.257 [2024-07-15 14:00:06.572762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.257 [2024-07-15 14:00:06.572776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.257 qpair failed and we were unable to recover it. 00:29:40.257 [2024-07-15 14:00:06.582730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.257 [2024-07-15 14:00:06.582806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.257 [2024-07-15 14:00:06.582822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.257 [2024-07-15 14:00:06.582829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.257 [2024-07-15 14:00:06.582839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.257 [2024-07-15 14:00:06.582854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.257 qpair failed and we were unable to recover it. 00:29:40.257 [2024-07-15 14:00:06.592784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.257 [2024-07-15 14:00:06.592863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.257 [2024-07-15 14:00:06.592879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.257 [2024-07-15 14:00:06.592886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.257 [2024-07-15 14:00:06.592892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.258 [2024-07-15 14:00:06.592906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.258 qpair failed and we were unable to recover it. 
00:29:40.258 [2024-07-15 14:00:06.602819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.258 [2024-07-15 14:00:06.602924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.258 [2024-07-15 14:00:06.602941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.258 [2024-07-15 14:00:06.602949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.258 [2024-07-15 14:00:06.602955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.258 [2024-07-15 14:00:06.602974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.258 qpair failed and we were unable to recover it. 00:29:40.258 [2024-07-15 14:00:06.612872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.258 [2024-07-15 14:00:06.612969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.258 [2024-07-15 14:00:06.612986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.258 [2024-07-15 14:00:06.612992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.258 [2024-07-15 14:00:06.612998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.258 [2024-07-15 14:00:06.613013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.258 qpair failed and we were unable to recover it. 00:29:40.258 [2024-07-15 14:00:06.622874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.258 [2024-07-15 14:00:06.622948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.258 [2024-07-15 14:00:06.622965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.258 [2024-07-15 14:00:06.622972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.258 [2024-07-15 14:00:06.622978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.258 [2024-07-15 14:00:06.622992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.258 qpair failed and we were unable to recover it. 
00:29:40.258 [2024-07-15 14:00:06.632901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.258 [2024-07-15 14:00:06.632980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.258 [2024-07-15 14:00:06.632996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.258 [2024-07-15 14:00:06.633003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.258 [2024-07-15 14:00:06.633009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.258 [2024-07-15 14:00:06.633023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.258 qpair failed and we were unable to recover it. 00:29:40.258 [2024-07-15 14:00:06.642977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.258 [2024-07-15 14:00:06.643058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.258 [2024-07-15 14:00:06.643074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.258 [2024-07-15 14:00:06.643081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.258 [2024-07-15 14:00:06.643087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.258 [2024-07-15 14:00:06.643101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.258 qpair failed and we were unable to recover it. 00:29:40.258 [2024-07-15 14:00:06.652990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.258 [2024-07-15 14:00:06.653068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.258 [2024-07-15 14:00:06.653084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.258 [2024-07-15 14:00:06.653091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.258 [2024-07-15 14:00:06.653097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.258 [2024-07-15 14:00:06.653110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.258 qpair failed and we were unable to recover it. 
00:29:40.258 [2024-07-15 14:00:06.662968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.258 [2024-07-15 14:00:06.663042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.258 [2024-07-15 14:00:06.663057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.258 [2024-07-15 14:00:06.663064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.258 [2024-07-15 14:00:06.663070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.258 [2024-07-15 14:00:06.663085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.258 qpair failed and we were unable to recover it. 00:29:40.258 [2024-07-15 14:00:06.673036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.258 [2024-07-15 14:00:06.673156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.258 [2024-07-15 14:00:06.673173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.258 [2024-07-15 14:00:06.673184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.258 [2024-07-15 14:00:06.673190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.258 [2024-07-15 14:00:06.673204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.258 qpair failed and we were unable to recover it. 00:29:40.258 [2024-07-15 14:00:06.683045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.258 [2024-07-15 14:00:06.683136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.258 [2024-07-15 14:00:06.683153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.258 [2024-07-15 14:00:06.683159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.258 [2024-07-15 14:00:06.683165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.258 [2024-07-15 14:00:06.683180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.258 qpair failed and we were unable to recover it. 
00:29:40.258 [2024-07-15 14:00:06.693055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.258 [2024-07-15 14:00:06.693146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.258 [2024-07-15 14:00:06.693162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.258 [2024-07-15 14:00:06.693169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.258 [2024-07-15 14:00:06.693176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.258 [2024-07-15 14:00:06.693190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.258 qpair failed and we were unable to recover it. 00:29:40.258 [2024-07-15 14:00:06.703044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.258 [2024-07-15 14:00:06.703126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.258 [2024-07-15 14:00:06.703142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.258 [2024-07-15 14:00:06.703149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.258 [2024-07-15 14:00:06.703155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.258 [2024-07-15 14:00:06.703169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.258 qpair failed and we were unable to recover it. 00:29:40.258 [2024-07-15 14:00:06.713012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.258 [2024-07-15 14:00:06.713085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.258 [2024-07-15 14:00:06.713101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.258 [2024-07-15 14:00:06.713109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.258 [2024-07-15 14:00:06.713115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.258 [2024-07-15 14:00:06.713132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.258 qpair failed and we were unable to recover it. 
00:29:40.258 [2024-07-15 14:00:06.723163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.258 [2024-07-15 14:00:06.723271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.258 [2024-07-15 14:00:06.723288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.258 [2024-07-15 14:00:06.723295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.258 [2024-07-15 14:00:06.723301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.259 [2024-07-15 14:00:06.723315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.259 qpair failed and we were unable to recover it. 00:29:40.259 [2024-07-15 14:00:06.733205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.259 [2024-07-15 14:00:06.733289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.259 [2024-07-15 14:00:06.733306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.259 [2024-07-15 14:00:06.733313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.259 [2024-07-15 14:00:06.733319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.259 [2024-07-15 14:00:06.733333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.259 qpair failed and we were unable to recover it. 00:29:40.259 [2024-07-15 14:00:06.743152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.259 [2024-07-15 14:00:06.743232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.259 [2024-07-15 14:00:06.743248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.259 [2024-07-15 14:00:06.743255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.259 [2024-07-15 14:00:06.743261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.259 [2024-07-15 14:00:06.743275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.259 qpair failed and we were unable to recover it. 
00:29:40.259 [2024-07-15 14:00:06.753256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.259 [2024-07-15 14:00:06.753339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.259 [2024-07-15 14:00:06.753356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.259 [2024-07-15 14:00:06.753363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.259 [2024-07-15 14:00:06.753369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.259 [2024-07-15 14:00:06.753384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.259 qpair failed and we were unable to recover it. 00:29:40.259 [2024-07-15 14:00:06.763282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.259 [2024-07-15 14:00:06.763365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.259 [2024-07-15 14:00:06.763381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.259 [2024-07-15 14:00:06.763393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.259 [2024-07-15 14:00:06.763399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.259 [2024-07-15 14:00:06.763413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.259 qpair failed and we were unable to recover it. 00:29:40.259 [2024-07-15 14:00:06.773204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.259 [2024-07-15 14:00:06.773371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.259 [2024-07-15 14:00:06.773388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.259 [2024-07-15 14:00:06.773395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.259 [2024-07-15 14:00:06.773401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.259 [2024-07-15 14:00:06.773415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.259 qpair failed and we were unable to recover it. 
00:29:40.521 [2024-07-15 14:00:06.783270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.521 [2024-07-15 14:00:06.783340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.521 [2024-07-15 14:00:06.783355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.521 [2024-07-15 14:00:06.783363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.521 [2024-07-15 14:00:06.783369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.521 [2024-07-15 14:00:06.783383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.521 qpair failed and we were unable to recover it. 00:29:40.521 [2024-07-15 14:00:06.793404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.521 [2024-07-15 14:00:06.793522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.521 [2024-07-15 14:00:06.793539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.521 [2024-07-15 14:00:06.793546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.521 [2024-07-15 14:00:06.793552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.521 [2024-07-15 14:00:06.793566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.521 qpair failed and we were unable to recover it. 00:29:40.521 [2024-07-15 14:00:06.803359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.521 [2024-07-15 14:00:06.803436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.521 [2024-07-15 14:00:06.803452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.521 [2024-07-15 14:00:06.803459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.521 [2024-07-15 14:00:06.803465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.521 [2024-07-15 14:00:06.803479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.521 qpair failed and we were unable to recover it. 
00:29:40.521 [2024-07-15 14:00:06.813325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.521 [2024-07-15 14:00:06.813440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.521 [2024-07-15 14:00:06.813457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.521 [2024-07-15 14:00:06.813464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.521 [2024-07-15 14:00:06.813470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.521 [2024-07-15 14:00:06.813485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.521 qpair failed and we were unable to recover it. 00:29:40.521 [2024-07-15 14:00:06.823299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.521 [2024-07-15 14:00:06.823371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.521 [2024-07-15 14:00:06.823387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.521 [2024-07-15 14:00:06.823395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.521 [2024-07-15 14:00:06.823401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.521 [2024-07-15 14:00:06.823416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.521 qpair failed and we were unable to recover it. 00:29:40.521 [2024-07-15 14:00:06.833461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.521 [2024-07-15 14:00:06.833538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.521 [2024-07-15 14:00:06.833554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.521 [2024-07-15 14:00:06.833561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.521 [2024-07-15 14:00:06.833567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.521 [2024-07-15 14:00:06.833581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.521 qpair failed and we were unable to recover it. 
00:29:40.521 [2024-07-15 14:00:06.843498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.521 [2024-07-15 14:00:06.843567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.521 [2024-07-15 14:00:06.843583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.521 [2024-07-15 14:00:06.843591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.521 [2024-07-15 14:00:06.843596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.521 [2024-07-15 14:00:06.843611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.521 qpair failed and we were unable to recover it. 00:29:40.521 [2024-07-15 14:00:06.853473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.521 [2024-07-15 14:00:06.853548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.521 [2024-07-15 14:00:06.853564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.521 [2024-07-15 14:00:06.853580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.521 [2024-07-15 14:00:06.853586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.521 [2024-07-15 14:00:06.853600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.521 qpair failed and we were unable to recover it. 00:29:40.521 [2024-07-15 14:00:06.863515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.521 [2024-07-15 14:00:06.863601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.521 [2024-07-15 14:00:06.863617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.521 [2024-07-15 14:00:06.863624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.521 [2024-07-15 14:00:06.863631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.521 [2024-07-15 14:00:06.863645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.521 qpair failed and we were unable to recover it. 
00:29:40.521 [2024-07-15 14:00:06.873594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.521 [2024-07-15 14:00:06.873679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.521 [2024-07-15 14:00:06.873695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.521 [2024-07-15 14:00:06.873702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.521 [2024-07-15 14:00:06.873708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.521 [2024-07-15 14:00:06.873722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 00:29:40.522 [2024-07-15 14:00:06.883548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:06.883625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:06.883640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:06.883647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:06.883653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:06.883667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 00:29:40.522 [2024-07-15 14:00:06.893628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:06.893706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:06.893722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:06.893729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:06.893735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:06.893749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 
00:29:40.522 [2024-07-15 14:00:06.903593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:06.903663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:06.903679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:06.903685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:06.903692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:06.903705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 00:29:40.522 [2024-07-15 14:00:06.913660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:06.913736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:06.913752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:06.913760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:06.913766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:06.913780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 00:29:40.522 [2024-07-15 14:00:06.923647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:06.923726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:06.923752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:06.923760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:06.923767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:06.923785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 
00:29:40.522 [2024-07-15 14:00:06.933672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:06.933753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:06.933770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:06.933778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:06.933784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:06.933799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 00:29:40.522 [2024-07-15 14:00:06.943718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:06.943801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:06.943832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:06.943841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:06.943848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:06.943867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 00:29:40.522 [2024-07-15 14:00:06.953796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:06.953879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:06.953905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:06.953914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:06.953921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:06.953940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 
00:29:40.522 [2024-07-15 14:00:06.963757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:06.963833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:06.963851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:06.963858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:06.963864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:06.963880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 00:29:40.522 [2024-07-15 14:00:06.973819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:06.973902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:06.973918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:06.973925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:06.973932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:06.973947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 00:29:40.522 [2024-07-15 14:00:06.983802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:06.983873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:06.983889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:06.983897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:06.983903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:06.983917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 
00:29:40.522 [2024-07-15 14:00:06.993874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:06.993954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:06.993971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:06.993978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:06.993984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:06.993999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 00:29:40.522 [2024-07-15 14:00:07.003895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:07.003982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:07.003998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:07.004005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:07.004011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:07.004026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 00:29:40.522 [2024-07-15 14:00:07.013786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:07.013865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:07.013882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:07.013889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:07.013895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:07.013909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 
00:29:40.522 [2024-07-15 14:00:07.023906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:07.023981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:07.023997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:07.024006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:07.024013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:07.024027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 00:29:40.522 [2024-07-15 14:00:07.033882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:07.033958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:07.033977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:07.033985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:07.033991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:07.034005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 00:29:40.522 [2024-07-15 14:00:07.043969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.522 [2024-07-15 14:00:07.044063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.522 [2024-07-15 14:00:07.044079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.522 [2024-07-15 14:00:07.044087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.522 [2024-07-15 14:00:07.044093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.522 [2024-07-15 14:00:07.044107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.522 qpair failed and we were unable to recover it. 
00:29:40.784 [2024-07-15 14:00:07.053957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.784 [2024-07-15 14:00:07.054117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.784 [2024-07-15 14:00:07.054137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.784 [2024-07-15 14:00:07.054144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.784 [2024-07-15 14:00:07.054150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.784 [2024-07-15 14:00:07.054165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.784 qpair failed and we were unable to recover it. 00:29:40.784 [2024-07-15 14:00:07.063929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.784 [2024-07-15 14:00:07.064031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.784 [2024-07-15 14:00:07.064047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.784 [2024-07-15 14:00:07.064054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.784 [2024-07-15 14:00:07.064061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.784 [2024-07-15 14:00:07.064075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.784 qpair failed and we were unable to recover it. 00:29:40.784 [2024-07-15 14:00:07.074067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.784 [2024-07-15 14:00:07.074141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.784 [2024-07-15 14:00:07.074157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.784 [2024-07-15 14:00:07.074164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.784 [2024-07-15 14:00:07.074171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.784 [2024-07-15 14:00:07.074189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.784 qpair failed and we were unable to recover it. 
00:29:40.784 [2024-07-15 14:00:07.084111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.784 [2024-07-15 14:00:07.084189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.784 [2024-07-15 14:00:07.084206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.784 [2024-07-15 14:00:07.084213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.784 [2024-07-15 14:00:07.084219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.784 [2024-07-15 14:00:07.084234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.784 qpair failed and we were unable to recover it. 00:29:40.784 [2024-07-15 14:00:07.094121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.784 [2024-07-15 14:00:07.094199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.784 [2024-07-15 14:00:07.094216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.784 [2024-07-15 14:00:07.094224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.784 [2024-07-15 14:00:07.094230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.784 [2024-07-15 14:00:07.094245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.784 qpair failed and we were unable to recover it. 00:29:40.784 [2024-07-15 14:00:07.104131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.784 [2024-07-15 14:00:07.104242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.784 [2024-07-15 14:00:07.104258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.784 [2024-07-15 14:00:07.104266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.784 [2024-07-15 14:00:07.104272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.784 [2024-07-15 14:00:07.104286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.784 qpair failed and we were unable to recover it. 
00:29:40.784 [2024-07-15 14:00:07.114165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.784 [2024-07-15 14:00:07.114236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.784 [2024-07-15 14:00:07.114252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.784 [2024-07-15 14:00:07.114259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.784 [2024-07-15 14:00:07.114265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.785 [2024-07-15 14:00:07.114280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.785 qpair failed and we were unable to recover it. 00:29:40.785 [2024-07-15 14:00:07.124191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.785 [2024-07-15 14:00:07.124350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.785 [2024-07-15 14:00:07.124370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.785 [2024-07-15 14:00:07.124378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.785 [2024-07-15 14:00:07.124384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.785 [2024-07-15 14:00:07.124398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.785 qpair failed and we were unable to recover it. 00:29:40.785 [2024-07-15 14:00:07.134299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.785 [2024-07-15 14:00:07.134377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.785 [2024-07-15 14:00:07.134393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.785 [2024-07-15 14:00:07.134401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.785 [2024-07-15 14:00:07.134407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.785 [2024-07-15 14:00:07.134421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.785 qpair failed and we were unable to recover it. 
00:29:40.785 [2024-07-15 14:00:07.144283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.785 [2024-07-15 14:00:07.144359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.785 [2024-07-15 14:00:07.144375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.785 [2024-07-15 14:00:07.144382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.785 [2024-07-15 14:00:07.144388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.785 [2024-07-15 14:00:07.144403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.785 qpair failed and we were unable to recover it. 00:29:40.785 [2024-07-15 14:00:07.154220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.785 [2024-07-15 14:00:07.154294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.785 [2024-07-15 14:00:07.154311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.785 [2024-07-15 14:00:07.154318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.785 [2024-07-15 14:00:07.154324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.785 [2024-07-15 14:00:07.154339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.785 qpair failed and we were unable to recover it. 00:29:40.785 [2024-07-15 14:00:07.164430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.785 [2024-07-15 14:00:07.164503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.785 [2024-07-15 14:00:07.164520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.785 [2024-07-15 14:00:07.164528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.785 [2024-07-15 14:00:07.164534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.785 [2024-07-15 14:00:07.164553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.785 qpair failed and we were unable to recover it. 
00:29:40.785 [2024-07-15 14:00:07.174353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.785 [2024-07-15 14:00:07.174472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.785 [2024-07-15 14:00:07.174488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.785 [2024-07-15 14:00:07.174496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.785 [2024-07-15 14:00:07.174502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.785 [2024-07-15 14:00:07.174517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.785 qpair failed and we were unable to recover it. 00:29:40.785 [2024-07-15 14:00:07.184362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.785 [2024-07-15 14:00:07.184433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.785 [2024-07-15 14:00:07.184449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.785 [2024-07-15 14:00:07.184456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.785 [2024-07-15 14:00:07.184462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.785 [2024-07-15 14:00:07.184476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.785 qpair failed and we were unable to recover it. 00:29:40.785 [2024-07-15 14:00:07.194404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.785 [2024-07-15 14:00:07.194472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.785 [2024-07-15 14:00:07.194489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.785 [2024-07-15 14:00:07.194496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.785 [2024-07-15 14:00:07.194502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.785 [2024-07-15 14:00:07.194517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.785 qpair failed and we were unable to recover it. 
00:29:40.785 [2024-07-15 14:00:07.204418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.785 [2024-07-15 14:00:07.204493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.785 [2024-07-15 14:00:07.204510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.785 [2024-07-15 14:00:07.204518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.785 [2024-07-15 14:00:07.204524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.785 [2024-07-15 14:00:07.204539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.785 qpair failed and we were unable to recover it. 00:29:40.785 [2024-07-15 14:00:07.214493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.785 [2024-07-15 14:00:07.214575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.785 [2024-07-15 14:00:07.214595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.786 [2024-07-15 14:00:07.214603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.786 [2024-07-15 14:00:07.214609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.786 [2024-07-15 14:00:07.214623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.786 qpair failed and we were unable to recover it. 00:29:40.786 [2024-07-15 14:00:07.224466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.786 [2024-07-15 14:00:07.224542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.786 [2024-07-15 14:00:07.224558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.786 [2024-07-15 14:00:07.224566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.786 [2024-07-15 14:00:07.224572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.786 [2024-07-15 14:00:07.224587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.786 qpair failed and we were unable to recover it. 
00:29:40.786 [2024-07-15 14:00:07.234491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.786 [2024-07-15 14:00:07.234570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.786 [2024-07-15 14:00:07.234589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.786 [2024-07-15 14:00:07.234597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.786 [2024-07-15 14:00:07.234603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.786 [2024-07-15 14:00:07.234618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.786 qpair failed and we were unable to recover it. 00:29:40.786 [2024-07-15 14:00:07.244519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.786 [2024-07-15 14:00:07.244592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.786 [2024-07-15 14:00:07.244609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.786 [2024-07-15 14:00:07.244616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.786 [2024-07-15 14:00:07.244623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.786 [2024-07-15 14:00:07.244638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.786 qpair failed and we were unable to recover it. 00:29:40.786 [2024-07-15 14:00:07.254502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.786 [2024-07-15 14:00:07.254580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.786 [2024-07-15 14:00:07.254596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.786 [2024-07-15 14:00:07.254604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.786 [2024-07-15 14:00:07.254611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.786 [2024-07-15 14:00:07.254628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.786 qpair failed and we were unable to recover it. 
00:29:40.786 [2024-07-15 14:00:07.264606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.786 [2024-07-15 14:00:07.264693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.786 [2024-07-15 14:00:07.264710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.786 [2024-07-15 14:00:07.264717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.786 [2024-07-15 14:00:07.264724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.786 [2024-07-15 14:00:07.264738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.786 qpair failed and we were unable to recover it. 00:29:40.786 [2024-07-15 14:00:07.274705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.786 [2024-07-15 14:00:07.274776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.786 [2024-07-15 14:00:07.274792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.786 [2024-07-15 14:00:07.274799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.786 [2024-07-15 14:00:07.274805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.786 [2024-07-15 14:00:07.274819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.786 qpair failed and we were unable to recover it. 00:29:40.786 [2024-07-15 14:00:07.284624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.786 [2024-07-15 14:00:07.284721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.786 [2024-07-15 14:00:07.284746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.786 [2024-07-15 14:00:07.284755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.786 [2024-07-15 14:00:07.284762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.786 [2024-07-15 14:00:07.284781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.786 qpair failed and we were unable to recover it. 
00:29:40.786 [2024-07-15 14:00:07.294644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.786 [2024-07-15 14:00:07.294770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.786 [2024-07-15 14:00:07.294787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.786 [2024-07-15 14:00:07.294795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.786 [2024-07-15 14:00:07.294802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.786 [2024-07-15 14:00:07.294817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.786 qpair failed and we were unable to recover it. 00:29:40.786 [2024-07-15 14:00:07.304587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.786 [2024-07-15 14:00:07.304659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.786 [2024-07-15 14:00:07.304679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.786 [2024-07-15 14:00:07.304687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.786 [2024-07-15 14:00:07.304694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:40.786 [2024-07-15 14:00:07.304709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.786 qpair failed and we were unable to recover it. 00:29:41.048 [2024-07-15 14:00:07.314702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.048 [2024-07-15 14:00:07.314784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.048 [2024-07-15 14:00:07.314810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.048 [2024-07-15 14:00:07.314819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.048 [2024-07-15 14:00:07.314826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.048 [2024-07-15 14:00:07.314844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.048 qpair failed and we were unable to recover it. 
00:29:41.048 [2024-07-15 14:00:07.324745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.048 [2024-07-15 14:00:07.324826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.048 [2024-07-15 14:00:07.324852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.048 [2024-07-15 14:00:07.324861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.048 [2024-07-15 14:00:07.324868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.048 [2024-07-15 14:00:07.324886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.048 qpair failed and we were unable to recover it. 00:29:41.048 [2024-07-15 14:00:07.334759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.048 [2024-07-15 14:00:07.334838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.048 [2024-07-15 14:00:07.334856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.048 [2024-07-15 14:00:07.334863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.048 [2024-07-15 14:00:07.334870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.048 [2024-07-15 14:00:07.334884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.048 qpair failed and we were unable to recover it. 00:29:41.048 [2024-07-15 14:00:07.344739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.048 [2024-07-15 14:00:07.344812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.048 [2024-07-15 14:00:07.344828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.048 [2024-07-15 14:00:07.344835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.048 [2024-07-15 14:00:07.344846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.048 [2024-07-15 14:00:07.344861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.048 qpair failed and we were unable to recover it. 
00:29:41.048 [2024-07-15 14:00:07.354877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.048 [2024-07-15 14:00:07.354992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.048 [2024-07-15 14:00:07.355009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.048 [2024-07-15 14:00:07.355017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.048 [2024-07-15 14:00:07.355023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.048 [2024-07-15 14:00:07.355038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.048 qpair failed and we were unable to recover it. 00:29:41.048 [2024-07-15 14:00:07.364836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.048 [2024-07-15 14:00:07.364908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.048 [2024-07-15 14:00:07.364924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.048 [2024-07-15 14:00:07.364932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.048 [2024-07-15 14:00:07.364938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.048 [2024-07-15 14:00:07.364952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.048 qpair failed and we were unable to recover it. 00:29:41.048 [2024-07-15 14:00:07.374877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.048 [2024-07-15 14:00:07.374956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.048 [2024-07-15 14:00:07.374973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.048 [2024-07-15 14:00:07.374980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.048 [2024-07-15 14:00:07.374987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.048 [2024-07-15 14:00:07.375001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.048 qpair failed and we were unable to recover it. 
00:29:41.048 [2024-07-15 14:00:07.384860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.048 [2024-07-15 14:00:07.384932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.048 [2024-07-15 14:00:07.384948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.048 [2024-07-15 14:00:07.384956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.048 [2024-07-15 14:00:07.384962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.048 [2024-07-15 14:00:07.384976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.048 qpair failed and we were unable to recover it. 00:29:41.048 [2024-07-15 14:00:07.394938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.048 [2024-07-15 14:00:07.395017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.048 [2024-07-15 14:00:07.395033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.048 [2024-07-15 14:00:07.395041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.048 [2024-07-15 14:00:07.395047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.048 [2024-07-15 14:00:07.395061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.048 qpair failed and we were unable to recover it. 00:29:41.048 [2024-07-15 14:00:07.404946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.048 [2024-07-15 14:00:07.405047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.048 [2024-07-15 14:00:07.405063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.048 [2024-07-15 14:00:07.405071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.048 [2024-07-15 14:00:07.405077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.048 [2024-07-15 14:00:07.405092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.048 qpair failed and we were unable to recover it. 
00:29:41.048 [2024-07-15 14:00:07.414978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.048 [2024-07-15 14:00:07.415057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.048 [2024-07-15 14:00:07.415073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.048 [2024-07-15 14:00:07.415080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.048 [2024-07-15 14:00:07.415086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.048 [2024-07-15 14:00:07.415100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.048 qpair failed and we were unable to recover it. 00:29:41.048 [2024-07-15 14:00:07.425001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.048 [2024-07-15 14:00:07.425071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.048 [2024-07-15 14:00:07.425087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.048 [2024-07-15 14:00:07.425094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.048 [2024-07-15 14:00:07.425100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.048 [2024-07-15 14:00:07.425115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.048 qpair failed and we were unable to recover it. 00:29:41.048 [2024-07-15 14:00:07.435110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.048 [2024-07-15 14:00:07.435189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.048 [2024-07-15 14:00:07.435206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.048 [2024-07-15 14:00:07.435214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.048 [2024-07-15 14:00:07.435224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.048 [2024-07-15 14:00:07.435239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.048 qpair failed and we were unable to recover it. 
00:29:41.048 [2024-07-15 14:00:07.445009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.048 [2024-07-15 14:00:07.445080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.049 [2024-07-15 14:00:07.445096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.049 [2024-07-15 14:00:07.445103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.049 [2024-07-15 14:00:07.445110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.049 [2024-07-15 14:00:07.445128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.049 qpair failed and we were unable to recover it. 00:29:41.049 [2024-07-15 14:00:07.455081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.049 [2024-07-15 14:00:07.455165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.049 [2024-07-15 14:00:07.455181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.049 [2024-07-15 14:00:07.455189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.049 [2024-07-15 14:00:07.455195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.049 [2024-07-15 14:00:07.455210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.049 qpair failed and we were unable to recover it. 00:29:41.049 [2024-07-15 14:00:07.465138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.049 [2024-07-15 14:00:07.465210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.049 [2024-07-15 14:00:07.465226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.049 [2024-07-15 14:00:07.465233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.049 [2024-07-15 14:00:07.465240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.049 [2024-07-15 14:00:07.465254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.049 qpair failed and we were unable to recover it. 
00:29:41.049 [2024-07-15 14:00:07.475132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.049 [2024-07-15 14:00:07.475204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.049 [2024-07-15 14:00:07.475220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.049 [2024-07-15 14:00:07.475227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.049 [2024-07-15 14:00:07.475234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.049 [2024-07-15 14:00:07.475248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.049 qpair failed and we were unable to recover it. 00:29:41.049 [2024-07-15 14:00:07.485155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.049 [2024-07-15 14:00:07.485245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.049 [2024-07-15 14:00:07.485261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.049 [2024-07-15 14:00:07.485269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.049 [2024-07-15 14:00:07.485275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.049 [2024-07-15 14:00:07.485289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.049 qpair failed and we were unable to recover it. 00:29:41.049 [2024-07-15 14:00:07.495180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.049 [2024-07-15 14:00:07.495263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.049 [2024-07-15 14:00:07.495279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.049 [2024-07-15 14:00:07.495286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.049 [2024-07-15 14:00:07.495293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.049 [2024-07-15 14:00:07.495307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.049 qpair failed and we were unable to recover it. 
00:29:41.049 [2024-07-15 14:00:07.505275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.049 [2024-07-15 14:00:07.505397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.049 [2024-07-15 14:00:07.505413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.049 [2024-07-15 14:00:07.505420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.049 [2024-07-15 14:00:07.505426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.049 [2024-07-15 14:00:07.505440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.049 qpair failed and we were unable to recover it. 00:29:41.049 [2024-07-15 14:00:07.515275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.049 [2024-07-15 14:00:07.515375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.049 [2024-07-15 14:00:07.515391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.049 [2024-07-15 14:00:07.515398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.049 [2024-07-15 14:00:07.515405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.049 [2024-07-15 14:00:07.515419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.049 qpair failed and we were unable to recover it. 00:29:41.049 [2024-07-15 14:00:07.525304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.049 [2024-07-15 14:00:07.525463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.049 [2024-07-15 14:00:07.525479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.049 [2024-07-15 14:00:07.525487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.049 [2024-07-15 14:00:07.525496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.049 [2024-07-15 14:00:07.525511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.049 qpair failed and we were unable to recover it. 
00:29:41.049 [2024-07-15 14:00:07.535288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.049 [2024-07-15 14:00:07.535378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.049 [2024-07-15 14:00:07.535395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.049 [2024-07-15 14:00:07.535402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.049 [2024-07-15 14:00:07.535409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.049 [2024-07-15 14:00:07.535423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.049 qpair failed and we were unable to recover it. 00:29:41.049 [2024-07-15 14:00:07.545357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.049 [2024-07-15 14:00:07.545436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.049 [2024-07-15 14:00:07.545452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.049 [2024-07-15 14:00:07.545460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.049 [2024-07-15 14:00:07.545466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.049 [2024-07-15 14:00:07.545481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.049 qpair failed and we were unable to recover it. 00:29:41.049 [2024-07-15 14:00:07.555363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.049 [2024-07-15 14:00:07.555433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.049 [2024-07-15 14:00:07.555449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.049 [2024-07-15 14:00:07.555456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.049 [2024-07-15 14:00:07.555463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.049 [2024-07-15 14:00:07.555476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.049 qpair failed and we were unable to recover it. 
00:29:41.049 [2024-07-15 14:00:07.565513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.049 [2024-07-15 14:00:07.565584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.049 [2024-07-15 14:00:07.565600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.049 [2024-07-15 14:00:07.565607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.049 [2024-07-15 14:00:07.565614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.049 [2024-07-15 14:00:07.565629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.049 qpair failed and we were unable to recover it. 00:29:41.312 [2024-07-15 14:00:07.575303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.312 [2024-07-15 14:00:07.575386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.312 [2024-07-15 14:00:07.575403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.312 [2024-07-15 14:00:07.575410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.312 [2024-07-15 14:00:07.575416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.312 [2024-07-15 14:00:07.575431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.312 qpair failed and we were unable to recover it. 00:29:41.312 [2024-07-15 14:00:07.585448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.312 [2024-07-15 14:00:07.585522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.312 [2024-07-15 14:00:07.585538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.312 [2024-07-15 14:00:07.585546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.312 [2024-07-15 14:00:07.585552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.312 [2024-07-15 14:00:07.585567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.312 qpair failed and we were unable to recover it. 
00:29:41.312 [2024-07-15 14:00:07.595511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.312 [2024-07-15 14:00:07.595594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.312 [2024-07-15 14:00:07.595611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.312 [2024-07-15 14:00:07.595621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.312 [2024-07-15 14:00:07.595627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.312 [2024-07-15 14:00:07.595641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.312 qpair failed and we were unable to recover it. 00:29:41.312 [2024-07-15 14:00:07.605468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.312 [2024-07-15 14:00:07.605543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.312 [2024-07-15 14:00:07.605559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.313 [2024-07-15 14:00:07.605566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.313 [2024-07-15 14:00:07.605573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.313 [2024-07-15 14:00:07.605587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.313 qpair failed and we were unable to recover it. 00:29:41.313 [2024-07-15 14:00:07.615504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.313 [2024-07-15 14:00:07.615577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.313 [2024-07-15 14:00:07.615593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.313 [2024-07-15 14:00:07.615603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.313 [2024-07-15 14:00:07.615611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.313 [2024-07-15 14:00:07.615625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.313 qpair failed and we were unable to recover it. 
00:29:41.313 [2024-07-15 14:00:07.625538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.313 [2024-07-15 14:00:07.625610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.313 [2024-07-15 14:00:07.625626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.313 [2024-07-15 14:00:07.625634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.313 [2024-07-15 14:00:07.625640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.313 [2024-07-15 14:00:07.625655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.313 qpair failed and we were unable to recover it. 00:29:41.313 [2024-07-15 14:00:07.635570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.313 [2024-07-15 14:00:07.635681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.313 [2024-07-15 14:00:07.635697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.313 [2024-07-15 14:00:07.635705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.313 [2024-07-15 14:00:07.635711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.313 [2024-07-15 14:00:07.635725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.313 qpair failed and we were unable to recover it. 00:29:41.313 [2024-07-15 14:00:07.645641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.313 [2024-07-15 14:00:07.645760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.313 [2024-07-15 14:00:07.645776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.313 [2024-07-15 14:00:07.645783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.313 [2024-07-15 14:00:07.645790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.313 [2024-07-15 14:00:07.645804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.313 qpair failed and we were unable to recover it. 
00:29:41.313 [2024-07-15 14:00:07.655630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.313 [2024-07-15 14:00:07.655703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.313 [2024-07-15 14:00:07.655719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.313 [2024-07-15 14:00:07.655727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.313 [2024-07-15 14:00:07.655734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.313 [2024-07-15 14:00:07.655749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.313 qpair failed and we were unable to recover it. 00:29:41.313 [2024-07-15 14:00:07.665661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.313 [2024-07-15 14:00:07.665740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.313 [2024-07-15 14:00:07.665766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.313 [2024-07-15 14:00:07.665774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.313 [2024-07-15 14:00:07.665782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.313 [2024-07-15 14:00:07.665801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.313 qpair failed and we were unable to recover it. 00:29:41.313 [2024-07-15 14:00:07.675768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.313 [2024-07-15 14:00:07.675848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.313 [2024-07-15 14:00:07.675873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.313 [2024-07-15 14:00:07.675882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.313 [2024-07-15 14:00:07.675889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.313 [2024-07-15 14:00:07.675908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.313 qpair failed and we were unable to recover it. 
00:29:41.313 [2024-07-15 14:00:07.685693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.313 [2024-07-15 14:00:07.685772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.313 [2024-07-15 14:00:07.685798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.313 [2024-07-15 14:00:07.685807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.313 [2024-07-15 14:00:07.685814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.313 [2024-07-15 14:00:07.685833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.313 qpair failed and we were unable to recover it. 00:29:41.313 [2024-07-15 14:00:07.695736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.313 [2024-07-15 14:00:07.695818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.313 [2024-07-15 14:00:07.695844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.313 [2024-07-15 14:00:07.695853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.313 [2024-07-15 14:00:07.695860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.313 [2024-07-15 14:00:07.695879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.313 qpair failed and we were unable to recover it. 00:29:41.313 [2024-07-15 14:00:07.705779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.313 [2024-07-15 14:00:07.705851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.313 [2024-07-15 14:00:07.705869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.313 [2024-07-15 14:00:07.705882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.313 [2024-07-15 14:00:07.705889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.313 [2024-07-15 14:00:07.705904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.313 qpair failed and we were unable to recover it. 
00:29:41.313 [2024-07-15 14:00:07.715771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.314 [2024-07-15 14:00:07.715848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.314 [2024-07-15 14:00:07.715874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.314 [2024-07-15 14:00:07.715883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.314 [2024-07-15 14:00:07.715890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.314 [2024-07-15 14:00:07.715908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.314 qpair failed and we were unable to recover it. 00:29:41.314 [2024-07-15 14:00:07.725857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.314 [2024-07-15 14:00:07.725936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.314 [2024-07-15 14:00:07.725954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.314 [2024-07-15 14:00:07.725962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.314 [2024-07-15 14:00:07.725968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.314 [2024-07-15 14:00:07.725983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.314 qpair failed and we were unable to recover it. 00:29:41.314 [2024-07-15 14:00:07.735823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.314 [2024-07-15 14:00:07.735909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.314 [2024-07-15 14:00:07.735936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.314 [2024-07-15 14:00:07.735945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.314 [2024-07-15 14:00:07.735951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.314 [2024-07-15 14:00:07.735970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.314 qpair failed and we were unable to recover it. 
00:29:41.314 [2024-07-15 14:00:07.745762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.314 [2024-07-15 14:00:07.745842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.314 [2024-07-15 14:00:07.745868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.314 [2024-07-15 14:00:07.745877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.314 [2024-07-15 14:00:07.745884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.314 [2024-07-15 14:00:07.745903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.314 qpair failed and we were unable to recover it. 00:29:41.314 [2024-07-15 14:00:07.755885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.314 [2024-07-15 14:00:07.755957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.314 [2024-07-15 14:00:07.755975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.314 [2024-07-15 14:00:07.755983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.314 [2024-07-15 14:00:07.755990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.314 [2024-07-15 14:00:07.756005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.314 qpair failed and we were unable to recover it. 00:29:41.314 [2024-07-15 14:00:07.765912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.314 [2024-07-15 14:00:07.765989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.314 [2024-07-15 14:00:07.766005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.314 [2024-07-15 14:00:07.766012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.314 [2024-07-15 14:00:07.766019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.314 [2024-07-15 14:00:07.766033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.314 qpair failed and we were unable to recover it. 
00:29:41.314 [2024-07-15 14:00:07.775955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.314 [2024-07-15 14:00:07.776035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.314 [2024-07-15 14:00:07.776052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.314 [2024-07-15 14:00:07.776060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.314 [2024-07-15 14:00:07.776066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.314 [2024-07-15 14:00:07.776080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.314 qpair failed and we were unable to recover it. 00:29:41.314 [2024-07-15 14:00:07.785993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.314 [2024-07-15 14:00:07.786064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.314 [2024-07-15 14:00:07.786080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.314 [2024-07-15 14:00:07.786088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.314 [2024-07-15 14:00:07.786095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.314 [2024-07-15 14:00:07.786110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.314 qpair failed and we were unable to recover it. 00:29:41.314 [2024-07-15 14:00:07.795990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.314 [2024-07-15 14:00:07.796064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.314 [2024-07-15 14:00:07.796079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.314 [2024-07-15 14:00:07.796090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.314 [2024-07-15 14:00:07.796097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.314 [2024-07-15 14:00:07.796112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.314 qpair failed and we were unable to recover it. 
00:29:41.314 [2024-07-15 14:00:07.806012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.314 [2024-07-15 14:00:07.806087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.314 [2024-07-15 14:00:07.806103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.314 [2024-07-15 14:00:07.806110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.314 [2024-07-15 14:00:07.806117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.314 [2024-07-15 14:00:07.806136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.314 qpair failed and we were unable to recover it. 00:29:41.314 [2024-07-15 14:00:07.816045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.314 [2024-07-15 14:00:07.816128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.314 [2024-07-15 14:00:07.816145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.314 [2024-07-15 14:00:07.816152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.315 [2024-07-15 14:00:07.816158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.315 [2024-07-15 14:00:07.816172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.315 qpair failed and we were unable to recover it. 00:29:41.315 [2024-07-15 14:00:07.826125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.315 [2024-07-15 14:00:07.826209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.315 [2024-07-15 14:00:07.826225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.315 [2024-07-15 14:00:07.826233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.315 [2024-07-15 14:00:07.826239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.315 [2024-07-15 14:00:07.826254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.315 qpair failed and we were unable to recover it. 
00:29:41.315 [2024-07-15 14:00:07.836137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.315 [2024-07-15 14:00:07.836220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.315 [2024-07-15 14:00:07.836236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.315 [2024-07-15 14:00:07.836243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.315 [2024-07-15 14:00:07.836249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.315 [2024-07-15 14:00:07.836264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.315 qpair failed and we were unable to recover it. 00:29:41.575 [2024-07-15 14:00:07.846150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:07.846225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:07.846241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:07.846249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:07.846255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:07.846270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:07.856189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:07.856315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:07.856332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:07.856339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:07.856346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:07.856360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 
00:29:41.576 [2024-07-15 14:00:07.866137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:07.866210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:07.866226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:07.866233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:07.866241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:07.866256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:07.876095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:07.876172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:07.876188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:07.876195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:07.876202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:07.876216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:07.886248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:07.886324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:07.886339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:07.886352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:07.886358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:07.886373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 
00:29:41.576 [2024-07-15 14:00:07.896261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:07.896340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:07.896356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:07.896363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:07.896370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:07.896385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:07.906368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:07.906482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:07.906498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:07.906505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:07.906512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:07.906526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:07.916377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:07.916447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:07.916463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:07.916470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:07.916476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:07.916491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 
00:29:41.576 [2024-07-15 14:00:07.926274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:07.926349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:07.926365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:07.926372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:07.926378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:07.926393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:07.936375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:07.936463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:07.936480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:07.936487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:07.936494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:07.936508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:07.946416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:07.946490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:07.946506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:07.946514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:07.946521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:07.946535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 
00:29:41.576 [2024-07-15 14:00:07.956423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:07.956494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:07.956510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:07.956517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:07.956525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:07.956539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:07.966431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:07.966536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:07.966552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:07.966559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:07.966566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:07.966580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:07.976526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:07.976638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:07.976658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:07.976665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:07.976672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:07.976686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 
00:29:41.576 [2024-07-15 14:00:07.986498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:07.986574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:07.986590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:07.986598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:07.986604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:07.986619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:07.996554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:07.996629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:07.996645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:07.996653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:07.996660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:07.996674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:08.006558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:08.006633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:08.006649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:08.006656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:08.006663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:08.006677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 
00:29:41.576 [2024-07-15 14:00:08.016586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:08.016700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:08.016716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:08.016724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:08.016730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:08.016747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:08.026604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:08.026680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:08.026696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:08.026703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:08.026710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:08.026724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:08.036658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:08.036729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:08.036745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:08.036753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:08.036760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:08.036774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 
00:29:41.576 [2024-07-15 14:00:08.046670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:08.046750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:08.046776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:08.046784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:08.046792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:08.046811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:08.056597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:08.056673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:08.056691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:08.056699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:08.056706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:08.056721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:08.066733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:08.066811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:08.066834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:08.066841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:08.066848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:08.066864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 
00:29:41.576 [2024-07-15 14:00:08.076760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:08.076829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:08.076847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:08.076855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:08.076862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:08.076877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:08.086813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:08.086888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:08.086904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:08.086912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:08.086919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:08.086933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 00:29:41.576 [2024-07-15 14:00:08.096809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.576 [2024-07-15 14:00:08.096891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.576 [2024-07-15 14:00:08.096909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.576 [2024-07-15 14:00:08.096917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.576 [2024-07-15 14:00:08.096923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.576 [2024-07-15 14:00:08.096938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.576 qpair failed and we were unable to recover it. 
00:29:41.838 [2024-07-15 14:00:08.106817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.838 [2024-07-15 14:00:08.106890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.838 [2024-07-15 14:00:08.106907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.838 [2024-07-15 14:00:08.106915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.838 [2024-07-15 14:00:08.106922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.838 [2024-07-15 14:00:08.106941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-07-15 14:00:08.116873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.838 [2024-07-15 14:00:08.116944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.838 [2024-07-15 14:00:08.116961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.838 [2024-07-15 14:00:08.116968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.838 [2024-07-15 14:00:08.116975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.838 [2024-07-15 14:00:08.116989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-07-15 14:00:08.126792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.838 [2024-07-15 14:00:08.126866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.838 [2024-07-15 14:00:08.126882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.838 [2024-07-15 14:00:08.126890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.838 [2024-07-15 14:00:08.126897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.838 [2024-07-15 14:00:08.126911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.838 qpair failed and we were unable to recover it. 
00:29:41.838 [2024-07-15 14:00:08.136944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.838 [2024-07-15 14:00:08.137024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.838 [2024-07-15 14:00:08.137040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.838 [2024-07-15 14:00:08.137048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.838 [2024-07-15 14:00:08.137054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.838 [2024-07-15 14:00:08.137069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-07-15 14:00:08.146939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.838 [2024-07-15 14:00:08.147010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.838 [2024-07-15 14:00:08.147026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.838 [2024-07-15 14:00:08.147034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.838 [2024-07-15 14:00:08.147041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.838 [2024-07-15 14:00:08.147056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-07-15 14:00:08.156923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.838 [2024-07-15 14:00:08.156994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.838 [2024-07-15 14:00:08.157014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.838 [2024-07-15 14:00:08.157021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.838 [2024-07-15 14:00:08.157028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.838 [2024-07-15 14:00:08.157043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.838 qpair failed and we were unable to recover it. 
00:29:41.838 [2024-07-15 14:00:08.167011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.838 [2024-07-15 14:00:08.167083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.838 [2024-07-15 14:00:08.167099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.838 [2024-07-15 14:00:08.167107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.838 [2024-07-15 14:00:08.167113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.838 [2024-07-15 14:00:08.167133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-07-15 14:00:08.177036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.838 [2024-07-15 14:00:08.177206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.838 [2024-07-15 14:00:08.177223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.838 [2024-07-15 14:00:08.177231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.838 [2024-07-15 14:00:08.177237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.838 [2024-07-15 14:00:08.177251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-07-15 14:00:08.187050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.838 [2024-07-15 14:00:08.187126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.838 [2024-07-15 14:00:08.187142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.838 [2024-07-15 14:00:08.187150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.838 [2024-07-15 14:00:08.187156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.838 [2024-07-15 14:00:08.187170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.838 qpair failed and we were unable to recover it. 
00:29:41.838 [2024-07-15 14:00:08.197128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.838 [2024-07-15 14:00:08.197203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.838 [2024-07-15 14:00:08.197219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.838 [2024-07-15 14:00:08.197226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.838 [2024-07-15 14:00:08.197233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.838 [2024-07-15 14:00:08.197251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-07-15 14:00:08.207109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.838 [2024-07-15 14:00:08.207184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.838 [2024-07-15 14:00:08.207201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.838 [2024-07-15 14:00:08.207208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.838 [2024-07-15 14:00:08.207215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.838 [2024-07-15 14:00:08.207230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.838 qpair failed and we were unable to recover it. 00:29:41.838 [2024-07-15 14:00:08.217183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.838 [2024-07-15 14:00:08.217281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.838 [2024-07-15 14:00:08.217297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.838 [2024-07-15 14:00:08.217305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.838 [2024-07-15 14:00:08.217311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.838 [2024-07-15 14:00:08.217326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.838 qpair failed and we were unable to recover it. 
00:29:41.838 [2024-07-15 14:00:08.227165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.838 [2024-07-15 14:00:08.227239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.839 [2024-07-15 14:00:08.227255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.839 [2024-07-15 14:00:08.227263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.839 [2024-07-15 14:00:08.227270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.839 [2024-07-15 14:00:08.227284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.839 qpair failed and we were unable to recover it. 00:29:41.839 [2024-07-15 14:00:08.237179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.839 [2024-07-15 14:00:08.237249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.839 [2024-07-15 14:00:08.237265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.839 [2024-07-15 14:00:08.237273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.839 [2024-07-15 14:00:08.237279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.839 [2024-07-15 14:00:08.237294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.839 qpair failed and we were unable to recover it. 00:29:41.839 [2024-07-15 14:00:08.247238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.839 [2024-07-15 14:00:08.247313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.839 [2024-07-15 14:00:08.247334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.839 [2024-07-15 14:00:08.247342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.839 [2024-07-15 14:00:08.247348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.839 [2024-07-15 14:00:08.247363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.839 qpair failed and we were unable to recover it. 
00:29:41.839 [2024-07-15 14:00:08.257275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.839 [2024-07-15 14:00:08.257365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.839 [2024-07-15 14:00:08.257382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.839 [2024-07-15 14:00:08.257390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.839 [2024-07-15 14:00:08.257396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.839 [2024-07-15 14:00:08.257410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.839 qpair failed and we were unable to recover it. 00:29:41.839 [2024-07-15 14:00:08.267266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.839 [2024-07-15 14:00:08.267333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.839 [2024-07-15 14:00:08.267348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.839 [2024-07-15 14:00:08.267355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.839 [2024-07-15 14:00:08.267362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.839 [2024-07-15 14:00:08.267376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.839 qpair failed and we were unable to recover it. 00:29:41.839 [2024-07-15 14:00:08.277328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.839 [2024-07-15 14:00:08.277404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.839 [2024-07-15 14:00:08.277420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.839 [2024-07-15 14:00:08.277427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.839 [2024-07-15 14:00:08.277434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.839 [2024-07-15 14:00:08.277448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.839 qpair failed and we were unable to recover it. 
00:29:41.839 [2024-07-15 14:00:08.287366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.839 [2024-07-15 14:00:08.287443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.839 [2024-07-15 14:00:08.287459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.839 [2024-07-15 14:00:08.287466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.839 [2024-07-15 14:00:08.287476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.839 [2024-07-15 14:00:08.287490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.839 qpair failed and we were unable to recover it. 00:29:41.839 [2024-07-15 14:00:08.297394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.839 [2024-07-15 14:00:08.297474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.839 [2024-07-15 14:00:08.297490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.839 [2024-07-15 14:00:08.297497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.839 [2024-07-15 14:00:08.297504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.839 [2024-07-15 14:00:08.297519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.839 qpair failed and we were unable to recover it. 00:29:41.839 [2024-07-15 14:00:08.307400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.839 [2024-07-15 14:00:08.307471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.839 [2024-07-15 14:00:08.307487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.839 [2024-07-15 14:00:08.307494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.839 [2024-07-15 14:00:08.307501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.839 [2024-07-15 14:00:08.307515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.839 qpair failed and we were unable to recover it. 
00:29:41.839 [2024-07-15 14:00:08.317419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.839 [2024-07-15 14:00:08.317492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.839 [2024-07-15 14:00:08.317509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.839 [2024-07-15 14:00:08.317516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.839 [2024-07-15 14:00:08.317523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.839 [2024-07-15 14:00:08.317537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.839 qpair failed and we were unable to recover it. 00:29:41.839 [2024-07-15 14:00:08.327480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.839 [2024-07-15 14:00:08.327554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.839 [2024-07-15 14:00:08.327570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.839 [2024-07-15 14:00:08.327578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.839 [2024-07-15 14:00:08.327584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.839 [2024-07-15 14:00:08.327599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.839 qpair failed and we were unable to recover it. 00:29:41.839 [2024-07-15 14:00:08.337462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.839 [2024-07-15 14:00:08.337541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.839 [2024-07-15 14:00:08.337558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.839 [2024-07-15 14:00:08.337565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.839 [2024-07-15 14:00:08.337572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.839 [2024-07-15 14:00:08.337587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.839 qpair failed and we were unable to recover it. 
00:29:41.839 [2024-07-15 14:00:08.347483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.839 [2024-07-15 14:00:08.347556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.839 [2024-07-15 14:00:08.347571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.839 [2024-07-15 14:00:08.347578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.839 [2024-07-15 14:00:08.347586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.839 [2024-07-15 14:00:08.347600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.839 qpair failed and we were unable to recover it. 00:29:41.839 [2024-07-15 14:00:08.357509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.839 [2024-07-15 14:00:08.357581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.839 [2024-07-15 14:00:08.357597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.839 [2024-07-15 14:00:08.357605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.839 [2024-07-15 14:00:08.357611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:41.839 [2024-07-15 14:00:08.357626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.839 qpair failed and we were unable to recover it. 00:29:42.104 [2024-07-15 14:00:08.367561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.367636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.367652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.367659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.367667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.367681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 
00:29:42.105 [2024-07-15 14:00:08.377568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.377641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.377657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.377664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.377679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.377693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 00:29:42.105 [2024-07-15 14:00:08.387614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.387686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.387703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.387710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.387717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.387731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 00:29:42.105 [2024-07-15 14:00:08.397616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.397690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.397706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.397714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.397721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.397735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 
00:29:42.105 [2024-07-15 14:00:08.407650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.407723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.407738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.407746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.407754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.407768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 00:29:42.105 [2024-07-15 14:00:08.417695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.417769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.417788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.417795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.417803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.417818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 00:29:42.105 [2024-07-15 14:00:08.427695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.427779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.427805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.427815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.427822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.427841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 
00:29:42.105 [2024-07-15 14:00:08.437749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.437832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.437858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.437866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.437873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.437892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 00:29:42.105 [2024-07-15 14:00:08.447789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.447861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.447880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.447887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.447894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.447910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 00:29:42.105 [2024-07-15 14:00:08.457813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.457898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.457924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.457933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.457940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.457958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 
00:29:42.105 [2024-07-15 14:00:08.467810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.467883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.467901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.467909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.467921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.467937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 00:29:42.105 [2024-07-15 14:00:08.477843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.477916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.477933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.477940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.477947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.477961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 00:29:42.105 [2024-07-15 14:00:08.487872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.487943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.487959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.487966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.487974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.487990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 
00:29:42.105 [2024-07-15 14:00:08.497902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.497980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.497997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.498004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.498011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.498026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 00:29:42.105 [2024-07-15 14:00:08.507938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.508016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.508032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.508041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.508047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.508061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 00:29:42.105 [2024-07-15 14:00:08.517952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.518030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.518046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.518054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.518061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.518075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 
00:29:42.105 [2024-07-15 14:00:08.527993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.528068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.528084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.528092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.528099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.528113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 00:29:42.105 [2024-07-15 14:00:08.538014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.538093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.538109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.538117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.538127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.538142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 00:29:42.105 [2024-07-15 14:00:08.548038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.548111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.548130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.548138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.548145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.548159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 
00:29:42.105 [2024-07-15 14:00:08.558098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.558170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.558186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.558197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.558205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.558219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 00:29:42.105 [2024-07-15 14:00:08.567994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.568068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.568084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.568091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.568098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.568112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 00:29:42.105 [2024-07-15 14:00:08.578152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.578231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.578247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.578254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.578261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.578275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 
00:29:42.105 [2024-07-15 14:00:08.588150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.588225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.588241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.588249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.588256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.588270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 00:29:42.105 [2024-07-15 14:00:08.598218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.105 [2024-07-15 14:00:08.598299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.105 [2024-07-15 14:00:08.598315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.105 [2024-07-15 14:00:08.598322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.105 [2024-07-15 14:00:08.598329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.105 [2024-07-15 14:00:08.598344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.105 qpair failed and we were unable to recover it. 00:29:42.105 [2024-07-15 14:00:08.608109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.106 [2024-07-15 14:00:08.608187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.106 [2024-07-15 14:00:08.608203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.106 [2024-07-15 14:00:08.608211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.106 [2024-07-15 14:00:08.608217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.106 [2024-07-15 14:00:08.608232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.106 qpair failed and we were unable to recover it. 
00:29:42.106 [2024-07-15 14:00:08.618159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.106 [2024-07-15 14:00:08.618235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.106 [2024-07-15 14:00:08.618252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.106 [2024-07-15 14:00:08.618260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.106 [2024-07-15 14:00:08.618267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.106 [2024-07-15 14:00:08.618283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.106 qpair failed and we were unable to recover it. 00:29:42.420 [2024-07-15 14:00:08.628223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.420 [2024-07-15 14:00:08.628298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.420 [2024-07-15 14:00:08.628314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.420 [2024-07-15 14:00:08.628322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.420 [2024-07-15 14:00:08.628328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.420 [2024-07-15 14:00:08.628343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.420 qpair failed and we were unable to recover it. 00:29:42.420 [2024-07-15 14:00:08.638275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.420 [2024-07-15 14:00:08.638347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.420 [2024-07-15 14:00:08.638364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.420 [2024-07-15 14:00:08.638371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.420 [2024-07-15 14:00:08.638378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.420 [2024-07-15 14:00:08.638393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.420 qpair failed and we were unable to recover it. 
00:29:42.420 [2024-07-15 14:00:08.648302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.420 [2024-07-15 14:00:08.648378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.420 [2024-07-15 14:00:08.648396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.420 [2024-07-15 14:00:08.648409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.420 [2024-07-15 14:00:08.648416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.420 [2024-07-15 14:00:08.648430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.420 qpair failed and we were unable to recover it. 00:29:42.420 [2024-07-15 14:00:08.658286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.420 [2024-07-15 14:00:08.658362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.420 [2024-07-15 14:00:08.658378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.420 [2024-07-15 14:00:08.658385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.420 [2024-07-15 14:00:08.658391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.420 [2024-07-15 14:00:08.658406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.420 qpair failed and we were unable to recover it. 00:29:42.420 [2024-07-15 14:00:08.668376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.420 [2024-07-15 14:00:08.668453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.420 [2024-07-15 14:00:08.668469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.420 [2024-07-15 14:00:08.668476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.420 [2024-07-15 14:00:08.668483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.420 [2024-07-15 14:00:08.668497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.420 qpair failed and we were unable to recover it. 
00:29:42.420 [2024-07-15 14:00:08.678408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.420 [2024-07-15 14:00:08.678480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.420 [2024-07-15 14:00:08.678497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.420 [2024-07-15 14:00:08.678505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.420 [2024-07-15 14:00:08.678512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.420 [2024-07-15 14:00:08.678526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.420 qpair failed and we were unable to recover it. 00:29:42.420 [2024-07-15 14:00:08.688611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.420 [2024-07-15 14:00:08.688685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.420 [2024-07-15 14:00:08.688701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.420 [2024-07-15 14:00:08.688708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.420 [2024-07-15 14:00:08.688715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.420 [2024-07-15 14:00:08.688729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.420 qpair failed and we were unable to recover it. 00:29:42.420 [2024-07-15 14:00:08.698462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.420 [2024-07-15 14:00:08.698538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.420 [2024-07-15 14:00:08.698554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.420 [2024-07-15 14:00:08.698562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.420 [2024-07-15 14:00:08.698569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.420 [2024-07-15 14:00:08.698583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.420 qpair failed and we were unable to recover it. 
00:29:42.420 [2024-07-15 14:00:08.708459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.420 [2024-07-15 14:00:08.708529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.420 [2024-07-15 14:00:08.708546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.420 [2024-07-15 14:00:08.708553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.420 [2024-07-15 14:00:08.708560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.420 [2024-07-15 14:00:08.708574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.420 qpair failed and we were unable to recover it. 00:29:42.420 [2024-07-15 14:00:08.718504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.420 [2024-07-15 14:00:08.718579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.420 [2024-07-15 14:00:08.718595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.420 [2024-07-15 14:00:08.718602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.421 [2024-07-15 14:00:08.718609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.421 [2024-07-15 14:00:08.718623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.421 qpair failed and we were unable to recover it. 00:29:42.421 [2024-07-15 14:00:08.728485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.421 [2024-07-15 14:00:08.728562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.421 [2024-07-15 14:00:08.728578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.421 [2024-07-15 14:00:08.728585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.421 [2024-07-15 14:00:08.728592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.421 [2024-07-15 14:00:08.728606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.421 qpair failed and we were unable to recover it. 
00:29:42.421 [2024-07-15 14:00:08.738555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.421 [2024-07-15 14:00:08.738633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.421 [2024-07-15 14:00:08.738648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.421 [2024-07-15 14:00:08.738661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.421 [2024-07-15 14:00:08.738667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.421 [2024-07-15 14:00:08.738682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.421 qpair failed and we were unable to recover it. 00:29:42.421 [2024-07-15 14:00:08.748610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.421 [2024-07-15 14:00:08.748691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.421 [2024-07-15 14:00:08.748707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.421 [2024-07-15 14:00:08.748715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.421 [2024-07-15 14:00:08.748721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.421 [2024-07-15 14:00:08.748736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.421 qpair failed and we were unable to recover it. 00:29:42.421 [2024-07-15 14:00:08.758645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.421 [2024-07-15 14:00:08.758717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.421 [2024-07-15 14:00:08.758733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.421 [2024-07-15 14:00:08.758740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.421 [2024-07-15 14:00:08.758746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.421 [2024-07-15 14:00:08.758761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.421 qpair failed and we were unable to recover it. 
00:29:42.421 [2024-07-15 14:00:08.768633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.421 [2024-07-15 14:00:08.768704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.421 [2024-07-15 14:00:08.768719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.421 [2024-07-15 14:00:08.768726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.421 [2024-07-15 14:00:08.768733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.421 [2024-07-15 14:00:08.768747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.421 qpair failed and we were unable to recover it. 00:29:42.421 [2024-07-15 14:00:08.778664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.421 [2024-07-15 14:00:08.778738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.421 [2024-07-15 14:00:08.778754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.421 [2024-07-15 14:00:08.778761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.421 [2024-07-15 14:00:08.778768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.421 [2024-07-15 14:00:08.778783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.421 qpair failed and we were unable to recover it. 00:29:42.421 [2024-07-15 14:00:08.788692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.421 [2024-07-15 14:00:08.788774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.421 [2024-07-15 14:00:08.788800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.421 [2024-07-15 14:00:08.788809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.421 [2024-07-15 14:00:08.788816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.421 [2024-07-15 14:00:08.788835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.421 qpair failed and we were unable to recover it. 
00:29:42.421 [2024-07-15 14:00:08.798732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.421 [2024-07-15 14:00:08.798806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.421 [2024-07-15 14:00:08.798833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.421 [2024-07-15 14:00:08.798842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.421 [2024-07-15 14:00:08.798849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.421 [2024-07-15 14:00:08.798868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.421 qpair failed and we were unable to recover it. 00:29:42.421 [2024-07-15 14:00:08.808777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.421 [2024-07-15 14:00:08.808854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.421 [2024-07-15 14:00:08.808881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.421 [2024-07-15 14:00:08.808890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.421 [2024-07-15 14:00:08.808897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.421 [2024-07-15 14:00:08.808915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.421 qpair failed and we were unable to recover it. 00:29:42.421 [2024-07-15 14:00:08.818685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.421 [2024-07-15 14:00:08.818770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.421 [2024-07-15 14:00:08.818796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.421 [2024-07-15 14:00:08.818805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.421 [2024-07-15 14:00:08.818812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.421 [2024-07-15 14:00:08.818831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.421 qpair failed and we were unable to recover it. 
00:29:42.421 [2024-07-15 14:00:08.828797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.421 [2024-07-15 14:00:08.828872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.421 [2024-07-15 14:00:08.828890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.421 [2024-07-15 14:00:08.828903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.421 [2024-07-15 14:00:08.828910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.421 [2024-07-15 14:00:08.828926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.421 qpair failed and we were unable to recover it. 00:29:42.421 [2024-07-15 14:00:08.838844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.421 [2024-07-15 14:00:08.838928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.421 [2024-07-15 14:00:08.838955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.421 [2024-07-15 14:00:08.838964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.421 [2024-07-15 14:00:08.838971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.421 [2024-07-15 14:00:08.838990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.421 qpair failed and we were unable to recover it. 00:29:42.421 [2024-07-15 14:00:08.848855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.421 [2024-07-15 14:00:08.848928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.421 [2024-07-15 14:00:08.848945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.421 [2024-07-15 14:00:08.848953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.421 [2024-07-15 14:00:08.848960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.421 [2024-07-15 14:00:08.848975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.421 qpair failed and we were unable to recover it. 
00:29:42.421 [2024-07-15 14:00:08.858895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.421 [2024-07-15 14:00:08.858965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.421 [2024-07-15 14:00:08.858980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.421 [2024-07-15 14:00:08.858987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.422 [2024-07-15 14:00:08.858993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.422 [2024-07-15 14:00:08.859007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.422 qpair failed and we were unable to recover it. 00:29:42.422 [2024-07-15 14:00:08.868931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.422 [2024-07-15 14:00:08.869004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.422 [2024-07-15 14:00:08.869021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.422 [2024-07-15 14:00:08.869027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.422 [2024-07-15 14:00:08.869034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.422 [2024-07-15 14:00:08.869048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.422 qpair failed and we were unable to recover it. 00:29:42.422 [2024-07-15 14:00:08.878937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.422 [2024-07-15 14:00:08.879010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.422 [2024-07-15 14:00:08.879027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.422 [2024-07-15 14:00:08.879034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.422 [2024-07-15 14:00:08.879041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.422 [2024-07-15 14:00:08.879055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.422 qpair failed and we were unable to recover it. 
00:29:42.422 [2024-07-15 14:00:08.888986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.422 [2024-07-15 14:00:08.889062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.422 [2024-07-15 14:00:08.889078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.422 [2024-07-15 14:00:08.889086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.422 [2024-07-15 14:00:08.889092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.422 [2024-07-15 14:00:08.889106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.422 qpair failed and we were unable to recover it. 00:29:42.422 [2024-07-15 14:00:08.898985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.422 [2024-07-15 14:00:08.899054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.422 [2024-07-15 14:00:08.899070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.422 [2024-07-15 14:00:08.899077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.422 [2024-07-15 14:00:08.899083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.422 [2024-07-15 14:00:08.899097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.422 qpair failed and we were unable to recover it. 00:29:42.422 [2024-07-15 14:00:08.909025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.422 [2024-07-15 14:00:08.909134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.422 [2024-07-15 14:00:08.909150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.422 [2024-07-15 14:00:08.909158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.422 [2024-07-15 14:00:08.909164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.422 [2024-07-15 14:00:08.909178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.422 qpair failed and we were unable to recover it. 
00:29:42.422 [2024-07-15 14:00:08.919051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.422 [2024-07-15 14:00:08.919134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.422 [2024-07-15 14:00:08.919154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.422 [2024-07-15 14:00:08.919162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.422 [2024-07-15 14:00:08.919169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.422 [2024-07-15 14:00:08.919183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.422 qpair failed and we were unable to recover it. 00:29:42.422 [2024-07-15 14:00:08.929074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.422 [2024-07-15 14:00:08.929149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.422 [2024-07-15 14:00:08.929165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.422 [2024-07-15 14:00:08.929173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.422 [2024-07-15 14:00:08.929180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.422 [2024-07-15 14:00:08.929194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.422 qpair failed and we were unable to recover it. 00:29:42.422 [2024-07-15 14:00:08.939225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.422 [2024-07-15 14:00:08.939307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.422 [2024-07-15 14:00:08.939322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.422 [2024-07-15 14:00:08.939330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.422 [2024-07-15 14:00:08.939336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.422 [2024-07-15 14:00:08.939351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.422 qpair failed and we were unable to recover it. 
00:29:42.693 [2024-07-15 14:00:08.949118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:08.949189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:08.949205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:08.949213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:08.949219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:08.949234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 00:29:42.693 [2024-07-15 14:00:08.959156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:08.959231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:08.959247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:08.959255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:08.959261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:08.959276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 00:29:42.693 [2024-07-15 14:00:08.969233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:08.969310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:08.969325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:08.969333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:08.969340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:08.969354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 
00:29:42.693 [2024-07-15 14:00:08.979213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:08.979295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:08.979310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:08.979318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:08.979324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:08.979338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 00:29:42.693 [2024-07-15 14:00:08.989235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:08.989309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:08.989324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:08.989332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:08.989339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:08.989354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 00:29:42.693 [2024-07-15 14:00:08.999263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:08.999333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:08.999349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:08.999357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:08.999364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:08.999378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 
00:29:42.693 [2024-07-15 14:00:09.009293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:09.009451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:09.009471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:09.009479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:09.009485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:09.009499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 00:29:42.693 [2024-07-15 14:00:09.019326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:09.019405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:09.019421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:09.019428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:09.019435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:09.019450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 00:29:42.693 [2024-07-15 14:00:09.029397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:09.029484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:09.029500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:09.029508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:09.029514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:09.029528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 
00:29:42.693 [2024-07-15 14:00:09.039387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:09.039459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:09.039476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:09.039483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:09.039490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:09.039505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 00:29:42.693 [2024-07-15 14:00:09.049520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:09.049594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:09.049610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:09.049617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:09.049623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:09.049642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 00:29:42.693 [2024-07-15 14:00:09.059451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:09.059529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:09.059544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:09.059552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:09.059559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:09.059573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 
00:29:42.693 [2024-07-15 14:00:09.069437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:09.069514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:09.069530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:09.069538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:09.069545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:09.069559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 00:29:42.693 [2024-07-15 14:00:09.079511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:09.079623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:09.079640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:09.079647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:09.079653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:09.079668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 00:29:42.693 [2024-07-15 14:00:09.089509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:09.089583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:09.089600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:09.089607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:09.089614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:09.089629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 
00:29:42.693 [2024-07-15 14:00:09.099533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:09.099609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:09.099628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:09.099636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:09.099644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:09.099658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 00:29:42.693 [2024-07-15 14:00:09.109537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:09.109614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:09.109630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:09.109637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:09.109644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:09.109658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 00:29:42.693 [2024-07-15 14:00:09.119576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:09.119647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:09.119663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:09.119670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:09.119678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:09.119692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 
00:29:42.693 [2024-07-15 14:00:09.129637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:09.129709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:09.129725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:09.129732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:09.129738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:09.129753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 00:29:42.693 [2024-07-15 14:00:09.139661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.693 [2024-07-15 14:00:09.139734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.693 [2024-07-15 14:00:09.139750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.693 [2024-07-15 14:00:09.139757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.693 [2024-07-15 14:00:09.139763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.693 [2024-07-15 14:00:09.139785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.693 qpair failed and we were unable to recover it. 00:29:42.693 [2024-07-15 14:00:09.149654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.694 [2024-07-15 14:00:09.149730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.694 [2024-07-15 14:00:09.149755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.694 [2024-07-15 14:00:09.149764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.694 [2024-07-15 14:00:09.149771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.694 [2024-07-15 14:00:09.149790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.694 qpair failed and we were unable to recover it. 
00:29:42.694 [2024-07-15 14:00:09.159642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.694 [2024-07-15 14:00:09.159771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.694 [2024-07-15 14:00:09.159797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.694 [2024-07-15 14:00:09.159806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.694 [2024-07-15 14:00:09.159813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.694 [2024-07-15 14:00:09.159831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.694 qpair failed and we were unable to recover it. 00:29:42.694 [2024-07-15 14:00:09.169711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.694 [2024-07-15 14:00:09.169793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.694 [2024-07-15 14:00:09.169819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.694 [2024-07-15 14:00:09.169828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.694 [2024-07-15 14:00:09.169835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.694 [2024-07-15 14:00:09.169854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.694 qpair failed and we were unable to recover it. 00:29:42.694 [2024-07-15 14:00:09.179772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.694 [2024-07-15 14:00:09.179859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.694 [2024-07-15 14:00:09.179885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.694 [2024-07-15 14:00:09.179894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.694 [2024-07-15 14:00:09.179901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.694 [2024-07-15 14:00:09.179919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.694 qpair failed and we were unable to recover it. 
00:29:42.694 [2024-07-15 14:00:09.189788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.694 [2024-07-15 14:00:09.189917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.694 [2024-07-15 14:00:09.189948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.694 [2024-07-15 14:00:09.189958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.694 [2024-07-15 14:00:09.189965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.694 [2024-07-15 14:00:09.189984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.694 qpair failed and we were unable to recover it. 00:29:42.694 [2024-07-15 14:00:09.199733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.694 [2024-07-15 14:00:09.199808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.694 [2024-07-15 14:00:09.199827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.694 [2024-07-15 14:00:09.199836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.694 [2024-07-15 14:00:09.199842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.694 [2024-07-15 14:00:09.199859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.694 qpair failed and we were unable to recover it. 00:29:42.694 [2024-07-15 14:00:09.209815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.694 [2024-07-15 14:00:09.209889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.694 [2024-07-15 14:00:09.209905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.694 [2024-07-15 14:00:09.209913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.694 [2024-07-15 14:00:09.209919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.694 [2024-07-15 14:00:09.209934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.694 qpair failed and we were unable to recover it. 
00:29:42.955 [2024-07-15 14:00:09.219864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.955 [2024-07-15 14:00:09.219981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.955 [2024-07-15 14:00:09.219998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.955 [2024-07-15 14:00:09.220005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.955 [2024-07-15 14:00:09.220012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.955 [2024-07-15 14:00:09.220026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.955 qpair failed and we were unable to recover it. 00:29:42.955 [2024-07-15 14:00:09.229888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.955 [2024-07-15 14:00:09.230043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.956 [2024-07-15 14:00:09.230059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.956 [2024-07-15 14:00:09.230066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.956 [2024-07-15 14:00:09.230077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.956 [2024-07-15 14:00:09.230091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.956 qpair failed and we were unable to recover it. 00:29:42.956 [2024-07-15 14:00:09.239932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.956 [2024-07-15 14:00:09.240005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.956 [2024-07-15 14:00:09.240021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.956 [2024-07-15 14:00:09.240028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.956 [2024-07-15 14:00:09.240035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.956 [2024-07-15 14:00:09.240049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.956 qpair failed and we were unable to recover it. 
00:29:42.956 [2024-07-15 14:00:09.249923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.956 [2024-07-15 14:00:09.249995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.956 [2024-07-15 14:00:09.250011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.956 [2024-07-15 14:00:09.250019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.956 [2024-07-15 14:00:09.250026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.956 [2024-07-15 14:00:09.250041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.956 qpair failed and we were unable to recover it. 00:29:42.956 [2024-07-15 14:00:09.259946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.956 [2024-07-15 14:00:09.260028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.956 [2024-07-15 14:00:09.260045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.956 [2024-07-15 14:00:09.260052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.956 [2024-07-15 14:00:09.260059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.956 [2024-07-15 14:00:09.260073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.956 qpair failed and we were unable to recover it. 00:29:42.956 [2024-07-15 14:00:09.269955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.956 [2024-07-15 14:00:09.270024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.956 [2024-07-15 14:00:09.270040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.956 [2024-07-15 14:00:09.270049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.956 [2024-07-15 14:00:09.270056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.956 [2024-07-15 14:00:09.270071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.956 qpair failed and we were unable to recover it. 
00:29:42.956 [2024-07-15 14:00:09.280004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.956 [2024-07-15 14:00:09.280079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.956 [2024-07-15 14:00:09.280099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.956 [2024-07-15 14:00:09.280107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.956 [2024-07-15 14:00:09.280113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.956 [2024-07-15 14:00:09.280132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.956 qpair failed and we were unable to recover it. 00:29:42.956 [2024-07-15 14:00:09.290055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.956 [2024-07-15 14:00:09.290128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.956 [2024-07-15 14:00:09.290145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.956 [2024-07-15 14:00:09.290152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.956 [2024-07-15 14:00:09.290159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.956 [2024-07-15 14:00:09.290174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.956 qpair failed and we were unable to recover it. 00:29:42.956 [2024-07-15 14:00:09.300053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.956 [2024-07-15 14:00:09.300169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.956 [2024-07-15 14:00:09.300185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.956 [2024-07-15 14:00:09.300193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.956 [2024-07-15 14:00:09.300199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.956 [2024-07-15 14:00:09.300214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.956 qpair failed and we were unable to recover it. 
00:29:42.956 [2024-07-15 14:00:09.310095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.956 [2024-07-15 14:00:09.310172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.956 [2024-07-15 14:00:09.310188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.956 [2024-07-15 14:00:09.310195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.956 [2024-07-15 14:00:09.310203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.956 [2024-07-15 14:00:09.310217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.956 qpair failed and we were unable to recover it. 00:29:42.956 [2024-07-15 14:00:09.320137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.956 [2024-07-15 14:00:09.320211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.956 [2024-07-15 14:00:09.320227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.956 [2024-07-15 14:00:09.320235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.956 [2024-07-15 14:00:09.320246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.956 [2024-07-15 14:00:09.320261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.956 qpair failed and we were unable to recover it. 00:29:42.956 [2024-07-15 14:00:09.330173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.956 [2024-07-15 14:00:09.330255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.956 [2024-07-15 14:00:09.330270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.956 [2024-07-15 14:00:09.330278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.956 [2024-07-15 14:00:09.330284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.956 [2024-07-15 14:00:09.330299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.956 qpair failed and we were unable to recover it. 
00:29:42.956 [2024-07-15 14:00:09.340168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.956 [2024-07-15 14:00:09.340333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.956 [2024-07-15 14:00:09.340350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.956 [2024-07-15 14:00:09.340357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.956 [2024-07-15 14:00:09.340364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.956 [2024-07-15 14:00:09.340378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.956 qpair failed and we were unable to recover it. 00:29:42.956 [2024-07-15 14:00:09.350193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.956 [2024-07-15 14:00:09.350266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.956 [2024-07-15 14:00:09.350282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.956 [2024-07-15 14:00:09.350289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.956 [2024-07-15 14:00:09.350296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.956 [2024-07-15 14:00:09.350310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.956 qpair failed and we were unable to recover it. 00:29:42.956 [2024-07-15 14:00:09.360232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.956 [2024-07-15 14:00:09.360306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.956 [2024-07-15 14:00:09.360322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.956 [2024-07-15 14:00:09.360329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.956 [2024-07-15 14:00:09.360335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.956 [2024-07-15 14:00:09.360350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.956 qpair failed and we were unable to recover it. 
00:29:42.956 [2024-07-15 14:00:09.370263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.956 [2024-07-15 14:00:09.370333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.956 [2024-07-15 14:00:09.370349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.956 [2024-07-15 14:00:09.370356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.956 [2024-07-15 14:00:09.370362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.957 [2024-07-15 14:00:09.370377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.957 qpair failed and we were unable to recover it. 00:29:42.957 [2024-07-15 14:00:09.380308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.957 [2024-07-15 14:00:09.380382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.957 [2024-07-15 14:00:09.380398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.957 [2024-07-15 14:00:09.380407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.957 [2024-07-15 14:00:09.380414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.957 [2024-07-15 14:00:09.380427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.957 qpair failed and we were unable to recover it. 00:29:42.957 [2024-07-15 14:00:09.390295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.957 [2024-07-15 14:00:09.390372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.957 [2024-07-15 14:00:09.390388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.957 [2024-07-15 14:00:09.390395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.957 [2024-07-15 14:00:09.390401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.957 [2024-07-15 14:00:09.390415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.957 qpair failed and we were unable to recover it. 
00:29:42.957 [2024-07-15 14:00:09.400341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.957 [2024-07-15 14:00:09.400409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.957 [2024-07-15 14:00:09.400426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.957 [2024-07-15 14:00:09.400433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.957 [2024-07-15 14:00:09.400440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.957 [2024-07-15 14:00:09.400454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.957 qpair failed and we were unable to recover it. 00:29:42.957 [2024-07-15 14:00:09.410404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.957 [2024-07-15 14:00:09.410488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.957 [2024-07-15 14:00:09.410503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.957 [2024-07-15 14:00:09.410511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.957 [2024-07-15 14:00:09.410522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.957 [2024-07-15 14:00:09.410536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.957 qpair failed and we were unable to recover it. 00:29:42.957 [2024-07-15 14:00:09.420393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.957 [2024-07-15 14:00:09.420471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.957 [2024-07-15 14:00:09.420487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.957 [2024-07-15 14:00:09.420494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.957 [2024-07-15 14:00:09.420502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.957 [2024-07-15 14:00:09.420516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.957 qpair failed and we were unable to recover it. 
00:29:42.957 [2024-07-15 14:00:09.430411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.957 [2024-07-15 14:00:09.430482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.957 [2024-07-15 14:00:09.430497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.957 [2024-07-15 14:00:09.430504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.957 [2024-07-15 14:00:09.430512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.957 [2024-07-15 14:00:09.430527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.957 qpair failed and we were unable to recover it. 00:29:42.957 [2024-07-15 14:00:09.440430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.957 [2024-07-15 14:00:09.440502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.957 [2024-07-15 14:00:09.440518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.957 [2024-07-15 14:00:09.440526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.957 [2024-07-15 14:00:09.440533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.957 [2024-07-15 14:00:09.440547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.957 qpair failed and we were unable to recover it. 00:29:42.957 [2024-07-15 14:00:09.450546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.957 [2024-07-15 14:00:09.450637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.957 [2024-07-15 14:00:09.450654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.957 [2024-07-15 14:00:09.450662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.957 [2024-07-15 14:00:09.450668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.957 [2024-07-15 14:00:09.450683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.957 qpair failed and we were unable to recover it. 
00:29:42.957 [2024-07-15 14:00:09.460511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.957 [2024-07-15 14:00:09.460586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.957 [2024-07-15 14:00:09.460602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.957 [2024-07-15 14:00:09.460609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.957 [2024-07-15 14:00:09.460616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.957 [2024-07-15 14:00:09.460631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.957 qpair failed and we were unable to recover it. 00:29:42.957 [2024-07-15 14:00:09.470576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.957 [2024-07-15 14:00:09.470669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.957 [2024-07-15 14:00:09.470686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.957 [2024-07-15 14:00:09.470693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.957 [2024-07-15 14:00:09.470700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:42.957 [2024-07-15 14:00:09.470714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.957 qpair failed and we were unable to recover it. 00:29:43.219 [2024-07-15 14:00:09.480610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.219 [2024-07-15 14:00:09.480694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.219 [2024-07-15 14:00:09.480710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.219 [2024-07-15 14:00:09.480718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.219 [2024-07-15 14:00:09.480724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:43.219 [2024-07-15 14:00:09.480738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.219 qpair failed and we were unable to recover it. 
00:29:43.219 [2024-07-15 14:00:09.490467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.219 [2024-07-15 14:00:09.490542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.219 [2024-07-15 14:00:09.490558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.219 [2024-07-15 14:00:09.490565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.219 [2024-07-15 14:00:09.490572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:43.219 [2024-07-15 14:00:09.490586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.219 qpair failed and we were unable to recover it. 00:29:43.219 [2024-07-15 14:00:09.500596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.219 [2024-07-15 14:00:09.500678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.219 [2024-07-15 14:00:09.500695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.219 [2024-07-15 14:00:09.500704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.219 [2024-07-15 14:00:09.500714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:43.219 [2024-07-15 14:00:09.500728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.219 qpair failed and we were unable to recover it. 00:29:43.219 [2024-07-15 14:00:09.510677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.219 [2024-07-15 14:00:09.510757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.219 [2024-07-15 14:00:09.510773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.219 [2024-07-15 14:00:09.510781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.219 [2024-07-15 14:00:09.510788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:43.219 [2024-07-15 14:00:09.510802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.219 qpair failed and we were unable to recover it. 
00:29:43.219 [2024-07-15 14:00:09.520648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.219 [2024-07-15 14:00:09.520723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.219 [2024-07-15 14:00:09.520740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.219 [2024-07-15 14:00:09.520747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.219 [2024-07-15 14:00:09.520754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:43.219 [2024-07-15 14:00:09.520769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.219 qpair failed and we were unable to recover it. 00:29:43.219 [2024-07-15 14:00:09.530660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.219 [2024-07-15 14:00:09.530742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.219 [2024-07-15 14:00:09.530768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.219 [2024-07-15 14:00:09.530777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.219 [2024-07-15 14:00:09.530784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:43.219 [2024-07-15 14:00:09.530804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.219 qpair failed and we were unable to recover it. 00:29:43.219 [2024-07-15 14:00:09.540693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.219 [2024-07-15 14:00:09.540775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.219 [2024-07-15 14:00:09.540801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.219 [2024-07-15 14:00:09.540810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.219 [2024-07-15 14:00:09.540817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:43.219 [2024-07-15 14:00:09.540836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.219 qpair failed and we were unable to recover it. 
00:29:43.219 [2024-07-15 14:00:09.550709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.219 [2024-07-15 14:00:09.550787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.219 [2024-07-15 14:00:09.550813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.219 [2024-07-15 14:00:09.550822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.219 [2024-07-15 14:00:09.550829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:43.219 [2024-07-15 14:00:09.550848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.219 qpair failed and we were unable to recover it. 00:29:43.219 [2024-07-15 14:00:09.560744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.219 [2024-07-15 14:00:09.560821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.219 [2024-07-15 14:00:09.560847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.219 [2024-07-15 14:00:09.560857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.219 [2024-07-15 14:00:09.560864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:43.219 [2024-07-15 14:00:09.560883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.219 qpair failed and we were unable to recover it. 00:29:43.219 [2024-07-15 14:00:09.570861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.219 [2024-07-15 14:00:09.570974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.219 [2024-07-15 14:00:09.570992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.219 [2024-07-15 14:00:09.570999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.219 [2024-07-15 14:00:09.571006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:43.219 [2024-07-15 14:00:09.571022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.219 qpair failed and we were unable to recover it. 
00:29:43.219 [2024-07-15 14:00:09.580762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.219 [2024-07-15 14:00:09.580882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.219 [2024-07-15 14:00:09.580899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.219 [2024-07-15 14:00:09.580907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.219 [2024-07-15 14:00:09.580913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:43.219 [2024-07-15 14:00:09.580928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.219 qpair failed and we were unable to recover it. 00:29:43.219 [2024-07-15 14:00:09.590820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.220 [2024-07-15 14:00:09.590934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.220 [2024-07-15 14:00:09.590950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.220 [2024-07-15 14:00:09.590962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.220 [2024-07-15 14:00:09.590969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:43.220 [2024-07-15 14:00:09.590984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.220 qpair failed and we were unable to recover it. 00:29:43.220 [2024-07-15 14:00:09.600942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.220 [2024-07-15 14:00:09.601049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.220 [2024-07-15 14:00:09.601065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.220 [2024-07-15 14:00:09.601072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.220 [2024-07-15 14:00:09.601079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d220 00:29:43.220 [2024-07-15 14:00:09.601094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.220 qpair failed and we were unable to recover it. 
00:29:43.220 [2024-07-15 14:00:09.610987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.220 [2024-07-15 14:00:09.611188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.220 [2024-07-15 14:00:09.611257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.220 [2024-07-15 14:00:09.611283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.220 [2024-07-15 14:00:09.611304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab04000b90 00:29:43.220 [2024-07-15 14:00:09.611357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.220 qpair failed and we were unable to recover it. 00:29:43.220 [2024-07-15 14:00:09.620970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.220 [2024-07-15 14:00:09.621099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.220 [2024-07-15 14:00:09.621141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.220 [2024-07-15 14:00:09.621157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.220 [2024-07-15 14:00:09.621171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fab04000b90 00:29:43.220 [2024-07-15 14:00:09.621202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.220 qpair failed and we were unable to recover it. 00:29:43.220 [2024-07-15 14:00:09.631035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.220 [2024-07-15 14:00:09.631222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.220 [2024-07-15 14:00:09.631286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.220 [2024-07-15 14:00:09.631311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.220 [2024-07-15 14:00:09.631331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faaf4000b90 00:29:43.220 [2024-07-15 14:00:09.631383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.220 qpair failed and we were unable to recover it. 
00:29:43.220 [2024-07-15 14:00:09.641056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.220 [2024-07-15 14:00:09.641231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.220 [2024-07-15 14:00:09.641276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.220 [2024-07-15 14:00:09.641297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.220 [2024-07-15 14:00:09.641316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faaf4000b90 00:29:43.220 [2024-07-15 14:00:09.641360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:43.220 qpair failed and we were unable to recover it. 00:29:43.220 [2024-07-15 14:00:09.641526] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:43.220 A controller has encountered a failure and is being reset. 00:29:43.220 [2024-07-15 14:00:09.641633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79af20 (9): Bad file descriptor 00:29:43.481 Controller properly reset. 00:29:43.481 Initializing NVMe Controllers 00:29:43.481 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:43.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:43.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:43.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:43.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:43.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:43.481 Initialization complete. Launching workers. 
00:29:43.481 Starting thread on core 1 00:29:43.481 Starting thread on core 2 00:29:43.481 Starting thread on core 3 00:29:43.481 Starting thread on core 0 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:43.481 00:29:43.481 real 0m11.505s 00:29:43.481 user 0m21.235s 00:29:43.481 sys 0m3.903s 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.481 ************************************ 00:29:43.481 END TEST nvmf_target_disconnect_tc2 00:29:43.481 ************************************ 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:43.481 rmmod nvme_tcp 00:29:43.481 rmmod nvme_fabrics 00:29:43.481 rmmod nvme_keyring 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1279036 ']' 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1279036 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1279036 ']' 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1279036 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1279036 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1279036' 00:29:43.481 killing process with pid 1279036 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1279036 00:29:43.481 14:00:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1279036 00:29:43.741 
14:00:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:43.741 14:00:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:43.741 14:00:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:43.741 14:00:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:43.741 14:00:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:43.741 14:00:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.741 14:00:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:43.741 14:00:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.656 14:00:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:45.656 00:29:45.656 real 0m21.437s 00:29:45.656 user 0m49.413s 00:29:45.656 sys 0m9.655s 00:29:45.656 14:00:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:45.656 14:00:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:45.656 ************************************ 00:29:45.656 END TEST nvmf_target_disconnect 00:29:45.656 ************************************ 00:29:45.916 14:00:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:45.916 14:00:12 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:29:45.916 14:00:12 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:45.916 14:00:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:45.916 14:00:12 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:29:45.916 00:29:45.916 real 22m44.386s 00:29:45.916 user 47m28.400s 00:29:45.916 sys 7m11.127s 00:29:45.917 14:00:12 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:45.917 14:00:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:45.917 ************************************ 00:29:45.917 END TEST nvmf_tcp 00:29:45.917 ************************************ 00:29:45.917 14:00:12 -- common/autotest_common.sh@1142 -- # return 0 00:29:45.917 14:00:12 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:45.917 14:00:12 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:45.917 14:00:12 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:45.917 14:00:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:45.917 14:00:12 -- common/autotest_common.sh@10 -- # set +x 00:29:45.917 ************************************ 00:29:45.917 START TEST spdkcli_nvmf_tcp 00:29:45.917 ************************************ 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:45.917 * Looking for test storage... 
00:29:45.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.917 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1281326 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1281326 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1281326 ']' 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:46.178 14:00:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:46.178 [2024-07-15 14:00:12.505049] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:29:46.178 [2024-07-15 14:00:12.505098] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281326 ] 00:29:46.178 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.178 [2024-07-15 14:00:12.565630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:46.178 [2024-07-15 14:00:12.631334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.178 [2024-07-15 14:00:12.631421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.747 14:00:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:46.747 14:00:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:29:46.747 14:00:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:46.747 14:00:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:46.747 14:00:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:47.007 14:00:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:47.008 14:00:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:47.008 14:00:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:47.008 14:00:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:47.008 14:00:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:47.008 14:00:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:47.008 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:47.008 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:47.008 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:47.008 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:47.008 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:47.008 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:47.008 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:47.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:47.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:47.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:47.008 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:47.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:47.008 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:47.008 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:47.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:47.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:47.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:47.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:47.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:47.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:47.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:47.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:47.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:47.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:47.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:47.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:47.008 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:47.008 ' 00:29:49.550 [2024-07-15 14:00:15.622757] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.491 [2024-07-15 14:00:16.786507] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:52.403 [2024-07-15 14:00:18.924702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:54.337 [2024-07-15 14:00:20.762308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:55.721 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:55.721 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:55.721 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:55.721 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:55.721 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:55.721 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:55.721 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:55.721 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:55.721 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:55.721 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:55.721 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:55.721 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:55.721 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:55.721 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:55.721 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:55.721 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:55.721 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:55.721 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:55.721 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:55.721 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:55.721 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:55.721 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:55.721 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:55.721 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:55.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:55.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:55.722 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:55.722 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:55.983 14:00:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:55.983 14:00:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:55.983 14:00:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:55.983 14:00:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:55.983 14:00:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:55.983 14:00:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:55.983 14:00:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:55.983 14:00:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:56.244 14:00:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:56.244 14:00:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:56.244 14:00:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:56.244 14:00:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:56.244 14:00:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:56.244 14:00:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:56.244 14:00:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:56.244 14:00:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:56.244 14:00:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:56.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:56.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:56.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:56.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:56.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:56.244 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:56.244 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:56.244 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:56.244 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:56.244 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:56.244 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:56.244 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:56.244 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:56.244 ' 00:30:01.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:01.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:01.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:01.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:01.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:01.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:01.537 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:01.538 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:01.538 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:01.538 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:01.538 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:30:01.538 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:01.538 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:01.538 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1281326 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1281326 ']' 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1281326 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1281326 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1281326' 00:30:01.538 killing process with pid 1281326 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1281326 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1281326 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1281326 ']' 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1281326 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1281326 ']' 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1281326 00:30:01.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1281326) - No such process 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1281326 is not found' 00:30:01.538 Process with pid 1281326 is not found 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:01.538 00:30:01.538 real 0m15.528s 00:30:01.538 user 0m32.002s 00:30:01.538 sys 0m0.693s 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:01.538 14:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:01.538 ************************************ 00:30:01.538 END TEST spdkcli_nvmf_tcp 00:30:01.538 ************************************ 00:30:01.538 14:00:27 -- common/autotest_common.sh@1142 -- # return 0 00:30:01.538 14:00:27 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:01.538 14:00:27 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:01.538 14:00:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:01.538 14:00:27 -- common/autotest_common.sh@10 -- # set +x 00:30:01.538 ************************************ 00:30:01.538 START TEST nvmf_identify_passthru 00:30:01.538 ************************************ 00:30:01.538 14:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:01.538 * Looking for test storage... 00:30:01.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:01.538 14:00:28 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.538 14:00:28 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.538 14:00:28 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.538 14:00:28 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.538 14:00:28 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.538 14:00:28 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.538 14:00:28 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.538 14:00:28 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:01.538 14:00:28 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:01.538 14:00:28 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.538 14:00:28 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.538 14:00:28 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.538 14:00:28 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.538 14:00:28 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.538 14:00:28 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.538 14:00:28 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.538 14:00:28 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:01.538 14:00:28 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.538 14:00:28 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:01.538 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.538 14:00:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:01.538 14:00:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.811 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:01.811 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:01.811 14:00:28 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:01.811 14:00:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.405 14:00:34 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:08.405 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:08.405 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:08.405 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:08.405 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
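The scan above keys on the PCI vendor:device pair 0x8086:0x159b (Intel E810) and then reads /sys/bus/pci/devices/<bdf>/net to learn which kernel interfaces (cvl_0_0, cvl_0_1) sit on those ports. A minimal standalone sketch of the same lookup, assuming lspci is installed; the loop and output format are illustrative and not part of nvmf/common.sh:

  # Find E810 ports by the numeric PCI ID matched above and print the
  # net interface the kernel has bound to each one.
  for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "$bdf -> $(ls /sys/bus/pci/devices/$bdf/net 2>/dev/null)"
  done

On this host that lists 0000:4b:00.0 and 0000:4b:00.1 with their cvl_0_* interfaces, which is what gather_supported_nvmf_pci_devs feeds into the TCP setup that follows.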
00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.405 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.667 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.667 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.667 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:08.667 14:00:34 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.667 14:00:35 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.667 14:00:35 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.667 14:00:35 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:08.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:30:08.667 00:30:08.667 --- 10.0.0.2 ping statistics --- 00:30:08.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.667 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:30:08.667 14:00:35 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:08.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:30:08.667 00:30:08.667 --- 10.0.0.1 ping statistics --- 00:30:08.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.667 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:30:08.667 14:00:35 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.667 14:00:35 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:08.667 14:00:35 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:08.667 14:00:35 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.667 14:00:35 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:08.667 14:00:35 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:08.667 14:00:35 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.667 14:00:35 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:08.667 14:00:35 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:08.667 14:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:08.667 14:00:35 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:08.667 14:00:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:08.667 14:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:08.667 14:00:35 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:08.667 14:00:35 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:08.667 14:00:35 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:08.667 14:00:35 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:08.667 14:00:35 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:08.667 14:00:35 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:08.667 14:00:35 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:08.667 14:00:35 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:08.667 14:00:35 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:08.928 14:00:35 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:08.928 14:00:35 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:30:08.928 14:00:35 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:30:08.928 14:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:08.928 14:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:08.928 14:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:08.928 14:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:08.928 14:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:08.928 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.529 
14:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:30:09.529 14:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:09.529 14:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:09.529 14:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:09.529 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.794 14:00:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:09.794 14:00:36 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:09.794 14:00:36 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:09.794 14:00:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:09.794 14:00:36 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:09.794 14:00:36 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:09.794 14:00:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:09.794 14:00:36 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1288164 00:30:09.794 14:00:36 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:09.794 14:00:36 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:09.794 14:00:36 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1288164 00:30:09.794 14:00:36 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1288164 ']' 00:30:09.794 14:00:36 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.794 14:00:36 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:09.795 14:00:36 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.795 14:00:36 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:09.795 14:00:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:09.795 [2024-07-15 14:00:36.309699] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:30:09.795 [2024-07-15 14:00:36.309751] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:10.055 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.055 [2024-07-15 14:00:36.374974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:10.055 [2024-07-15 14:00:36.442560] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:10.055 [2024-07-15 14:00:36.442601] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
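The serial and model numbers recorded just above come straight from the drive over PCIe, so they can later be compared against what the NVMe-oF passthru subsystem reports. A rough sketch of that local lookup, reusing the gen_nvme.sh | jq filter and the spdk_nvme_identify | grep | awk pipeline traced in this run; taking the first traddr with head -1 is an assumption for brevity, the harness resolves it through its get_first_nvme_bdf helper:

  # Pick a local NVMe controller and read its serial/model over PCIe.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -1)  # assumption: first controller
  serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  model=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
  echo "$bdf: serial=$serial model=$model"

In this run that yields S64GNE0R605487 and SAMSUNG for the drive at 0000:65:00.0.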
00:30:10.055 [2024-07-15 14:00:36.442609] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:10.055 [2024-07-15 14:00:36.442615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:10.055 [2024-07-15 14:00:36.442621] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:10.055 [2024-07-15 14:00:36.442753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.055 [2024-07-15 14:00:36.442870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:10.055 [2024-07-15 14:00:36.443026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.055 [2024-07-15 14:00:36.443028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:10.625 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:10.625 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:30:10.625 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:10.625 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.625 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:10.625 INFO: Log level set to 20 00:30:10.625 INFO: Requests: 00:30:10.625 { 00:30:10.625 "jsonrpc": "2.0", 00:30:10.625 "method": "nvmf_set_config", 00:30:10.625 "id": 1, 00:30:10.625 "params": { 00:30:10.625 "admin_cmd_passthru": { 00:30:10.625 "identify_ctrlr": true 00:30:10.625 } 00:30:10.625 } 00:30:10.625 } 00:30:10.625 00:30:10.625 INFO: response: 00:30:10.626 { 00:30:10.626 "jsonrpc": "2.0", 00:30:10.626 "id": 1, 00:30:10.626 "result": true 00:30:10.626 } 00:30:10.626 00:30:10.626 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.626 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:10.626 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.626 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:10.626 INFO: Setting log level to 20 00:30:10.626 INFO: Setting log level to 20 00:30:10.626 INFO: Log level set to 20 00:30:10.626 INFO: Log level set to 20 00:30:10.626 INFO: Requests: 00:30:10.626 { 00:30:10.626 "jsonrpc": "2.0", 00:30:10.626 "method": "framework_start_init", 00:30:10.626 "id": 1 00:30:10.626 } 00:30:10.626 00:30:10.626 INFO: Requests: 00:30:10.626 { 00:30:10.626 "jsonrpc": "2.0", 00:30:10.626 "method": "framework_start_init", 00:30:10.626 "id": 1 00:30:10.626 } 00:30:10.626 00:30:10.886 [2024-07-15 14:00:37.159558] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:10.886 INFO: response: 00:30:10.886 { 00:30:10.886 "jsonrpc": "2.0", 00:30:10.886 "id": 1, 00:30:10.886 "result": true 00:30:10.886 } 00:30:10.886 00:30:10.886 INFO: response: 00:30:10.886 { 00:30:10.886 "jsonrpc": "2.0", 00:30:10.886 "id": 1, 00:30:10.886 "result": true 00:30:10.886 } 00:30:10.886 00:30:10.886 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.886 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:10.886 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.886 14:00:37 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:10.886 INFO: Setting log level to 40 00:30:10.886 INFO: Setting log level to 40 00:30:10.886 INFO: Setting log level to 40 00:30:10.886 [2024-07-15 14:00:37.172872] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.886 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.886 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:10.886 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:10.886 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:10.886 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:10.886 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.886 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:11.147 Nvme0n1 00:30:11.147 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.147 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:11.147 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.147 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:11.147 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.147 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:11.147 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.148 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:11.148 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.148 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.148 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.148 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:11.148 [2024-07-15 14:00:37.557384] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.148 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.148 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:11.148 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.148 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:11.148 [ 00:30:11.148 { 00:30:11.148 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:11.148 "subtype": "Discovery", 00:30:11.148 "listen_addresses": [], 00:30:11.148 "allow_any_host": true, 00:30:11.148 "hosts": [] 00:30:11.148 }, 00:30:11.148 { 00:30:11.148 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:11.148 "subtype": "NVMe", 00:30:11.148 "listen_addresses": [ 00:30:11.148 { 00:30:11.148 "trtype": "TCP", 00:30:11.148 "adrfam": "IPv4", 00:30:11.148 "traddr": "10.0.0.2", 00:30:11.148 "trsvcid": "4420" 00:30:11.148 } 00:30:11.148 ], 00:30:11.148 "allow_any_host": true, 00:30:11.148 "hosts": [], 00:30:11.148 "serial_number": 
"SPDK00000000000001", 00:30:11.148 "model_number": "SPDK bdev Controller", 00:30:11.148 "max_namespaces": 1, 00:30:11.148 "min_cntlid": 1, 00:30:11.148 "max_cntlid": 65519, 00:30:11.148 "namespaces": [ 00:30:11.148 { 00:30:11.148 "nsid": 1, 00:30:11.148 "bdev_name": "Nvme0n1", 00:30:11.148 "name": "Nvme0n1", 00:30:11.148 "nguid": "36344730526054870025384500000044", 00:30:11.148 "uuid": "36344730-5260-5487-0025-384500000044" 00:30:11.148 } 00:30:11.148 ] 00:30:11.148 } 00:30:11.148 ] 00:30:11.148 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.148 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:11.148 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:11.148 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:11.148 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.408 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:30:11.408 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:11.408 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:11.408 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:11.408 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.408 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:11.408 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:30:11.408 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:11.408 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:11.408 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.408 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:11.408 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.408 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:11.408 14:00:37 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:11.408 14:00:37 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:11.408 14:00:37 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:11.409 14:00:37 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:11.409 14:00:37 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:11.409 14:00:37 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:11.409 14:00:37 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:11.409 rmmod nvme_tcp 00:30:11.409 rmmod nvme_fabrics 00:30:11.409 rmmod nvme_keyring 00:30:11.409 14:00:37 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:11.669 14:00:37 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:11.669 14:00:37 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:11.669 14:00:37 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1288164 ']' 00:30:11.669 14:00:37 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1288164 00:30:11.669 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1288164 ']' 00:30:11.669 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1288164 00:30:11.669 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:30:11.669 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:11.669 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1288164 00:30:11.669 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:11.669 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:11.669 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1288164' 00:30:11.669 killing process with pid 1288164 00:30:11.669 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1288164 00:30:11.669 14:00:37 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1288164 00:30:11.930 14:00:38 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:11.930 14:00:38 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:11.930 14:00:38 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:11.930 14:00:38 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:11.930 14:00:38 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:11.930 14:00:38 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.930 14:00:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:11.930 14:00:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.839 14:00:40 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:13.839 00:30:13.839 real 0m12.405s 00:30:13.839 user 0m9.659s 00:30:13.839 sys 0m5.891s 00:30:13.839 14:00:40 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:13.839 14:00:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:13.839 ************************************ 00:30:13.839 END TEST nvmf_identify_passthru 00:30:13.839 ************************************ 00:30:14.100 14:00:40 -- common/autotest_common.sh@1142 -- # return 0 00:30:14.100 14:00:40 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:14.100 14:00:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:14.100 14:00:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:14.100 14:00:40 -- common/autotest_common.sh@10 -- # set +x 00:30:14.100 ************************************ 00:30:14.100 START TEST nvmf_dif 00:30:14.100 ************************************ 00:30:14.100 14:00:40 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:14.100 * Looking for test storage... 
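Stripped of the rpc_cmd plumbing, the identify_passthru target configuration that just ran reduces to a short RPC sequence: enable passthru identify before framework init (as the test does), bring up the TCP transport, attach the local controller at 0000:65:00.0 as Nvme0, and export it through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. A condensed sketch using scripts/rpc.py, assuming the target was started with --wait-for-rpc and is listening on the default /var/tmp/spdk.sock as in this run:

  # Replay of the passthru-identify setup via SPDK's RPC client.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_set_config --passthru-identify-ctrlr            # set before framework init, as traced above
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After that, spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' reports the physical drive's serial (S64GNE0R605487) rather than an SPDK-generated one, which is exactly the equality check the test performs before tearing the subsystem down.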
00:30:14.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:14.100 14:00:40 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.100 14:00:40 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.100 14:00:40 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.100 14:00:40 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.100 14:00:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.100 14:00:40 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.100 14:00:40 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.100 14:00:40 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:14.100 14:00:40 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:14.100 14:00:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:14.100 14:00:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:14.100 14:00:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:14.100 14:00:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:14.100 14:00:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.100 14:00:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:14.100 14:00:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:14.100 14:00:40 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:14.100 14:00:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:22.239 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:22.239 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:22.239 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:22.239 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:22.239 14:00:47 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:22.239 14:00:47 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:22.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:22.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:30:22.239 00:30:22.239 --- 10.0.0.2 ping statistics --- 00:30:22.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.239 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:30:22.240 14:00:47 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:22.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:22.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:30:22.240 00:30:22.240 --- 10.0.0.1 ping statistics --- 00:30:22.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.240 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:30:22.240 14:00:47 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:22.240 14:00:47 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:22.240 14:00:47 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:22.240 14:00:47 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:24.786 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:24.786 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:24.786 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:24.786 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:24.786 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:24.786 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:24.786 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:24.787 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:24.787 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:24.787 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:24.787 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:24.787 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:24.787 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:24.787 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:24.787 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:24.787 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:24.787 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:24.787 14:00:51 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:24.787 14:00:51 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:24.787 14:00:51 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:24.787 14:00:51 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:24.787 14:00:51 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:24.787 14:00:51 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:24.787 14:00:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:24.787 14:00:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:24.787 14:00:51 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:24.787 14:00:51 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:24.787 14:00:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:24.787 14:00:51 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1294188 00:30:24.787 14:00:51 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1294188 00:30:24.787 14:00:51 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:24.787 14:00:51 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1294188 ']' 00:30:24.787 14:00:51 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.787 14:00:51 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:24.787 14:00:51 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.787 14:00:51 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:24.787 14:00:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:24.787 [2024-07-15 14:00:51.239273] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:30:24.787 [2024-07-15 14:00:51.239327] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.787 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.787 [2024-07-15 14:00:51.307215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.047 [2024-07-15 14:00:51.377412] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:25.047 [2024-07-15 14:00:51.377449] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:25.047 [2024-07-15 14:00:51.377457] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:25.047 [2024-07-15 14:00:51.377467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:25.047 [2024-07-15 14:00:51.377473] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
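The target that dif.sh talks to is started inside the cvl_0_0_ns_spdk namespace assembled a few lines earlier, and the script then blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A condensed sketch of that bring-up, restricted to commands that appear verbatim in the trace above (the waitforlisten loop itself is paraphrased, not quoted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!                               # waitforlisten then polls /var/tmp/spdk.sock for this PID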
00:30:25.047 [2024-07-15 14:00:51.377490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.624 14:00:51 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:25.624 14:00:51 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:30:25.624 14:00:51 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:25.624 14:00:51 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:25.624 14:00:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:25.624 14:00:52 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:25.624 14:00:52 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:25.624 14:00:52 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:25.624 14:00:52 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.624 14:00:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:25.624 [2024-07-15 14:00:52.047917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.624 14:00:52 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.624 14:00:52 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:25.624 14:00:52 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:25.624 14:00:52 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:25.624 14:00:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:25.624 ************************************ 00:30:25.624 START TEST fio_dif_1_default 00:30:25.624 ************************************ 00:30:25.624 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:30:25.624 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:25.624 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:25.624 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:25.624 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:25.624 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:25.624 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:25.624 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.624 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:25.624 bdev_null0 00:30:25.624 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.624 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:25.624 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.624 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:25.625 [2024-07-15 14:00:52.132239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:25.625 { 00:30:25.625 "params": { 00:30:25.625 "name": "Nvme$subsystem", 00:30:25.625 "trtype": "$TEST_TRANSPORT", 00:30:25.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.625 "adrfam": "ipv4", 00:30:25.625 "trsvcid": "$NVMF_PORT", 00:30:25.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.625 "hdgst": ${hdgst:-false}, 00:30:25.625 "ddgst": ${ddgst:-false} 00:30:25.625 }, 00:30:25.625 "method": "bdev_nvme_attach_controller" 00:30:25.625 } 00:30:25.625 EOF 00:30:25.625 )") 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:25.625 14:00:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:25.885 14:00:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:25.885 "params": { 00:30:25.885 "name": "Nvme0", 00:30:25.885 "trtype": "tcp", 00:30:25.885 "traddr": "10.0.0.2", 00:30:25.885 "adrfam": "ipv4", 00:30:25.885 "trsvcid": "4420", 00:30:25.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:25.885 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:25.885 "hdgst": false, 00:30:25.885 "ddgst": false 00:30:25.885 }, 00:30:25.885 "method": "bdev_nvme_attach_controller" 00:30:25.885 }' 00:30:25.885 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:25.885 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:25.885 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:25.885 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:25.886 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:25.886 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:25.886 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:25.886 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:25.886 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:25.886 14:00:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:26.145 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:26.145 fio-3.35 00:30:26.145 Starting 1 thread 00:30:26.145 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.439 00:30:38.439 filename0: (groupid=0, jobs=1): err= 0: pid=1294699: Mon Jul 15 14:01:03 2024 00:30:38.439 read: IOPS=185, BW=741KiB/s (758kB/s)(7424KiB/10023msec) 00:30:38.439 slat (nsec): min=2924, max=16277, avg=5521.72, stdev=540.30 00:30:38.439 clat (usec): min=1004, max=47844, avg=21585.97, stdev=20152.49 00:30:38.439 lat (usec): min=1010, max=47854, avg=21591.49, stdev=20152.45 00:30:38.439 clat percentiles (usec): 00:30:38.439 | 1.00th=[ 1270], 5.00th=[ 1303], 10.00th=[ 1336], 20.00th=[ 1352], 00:30:38.439 | 30.00th=[ 1369], 40.00th=[ 1385], 50.00th=[41681], 60.00th=[41681], 00:30:38.439 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:38.439 | 99.00th=[41681], 99.50th=[41681], 99.90th=[47973], 99.95th=[47973], 00:30:38.439 | 99.99th=[47973] 00:30:38.439 bw ( KiB/s): min= 672, max= 768, per=99.91%, avg=740.80, stdev=34.86, samples=20 00:30:38.439 iops : min= 168, max= 192, 
avg=185.20, stdev= 8.72, samples=20 00:30:38.439 lat (msec) : 2=49.78%, 50=50.22% 00:30:38.439 cpu : usr=95.21%, sys=4.61%, ctx=21, majf=0, minf=222 00:30:38.439 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:38.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.439 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.439 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:38.439 00:30:38.439 Run status group 0 (all jobs): 00:30:38.439 READ: bw=741KiB/s (758kB/s), 741KiB/s-741KiB/s (758kB/s-758kB/s), io=7424KiB (7602kB), run=10023-10023msec 00:30:38.439 14:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:38.439 14:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:38.439 14:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:38.439 14:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:38.439 14:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:38.439 14:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:38.439 14:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.439 14:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.440 00:30:38.440 real 0m11.133s 00:30:38.440 user 0m25.135s 00:30:38.440 sys 0m0.754s 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:38.440 ************************************ 00:30:38.440 END TEST fio_dif_1_default 00:30:38.440 ************************************ 00:30:38.440 14:01:03 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:38.440 14:01:03 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:38.440 14:01:03 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:38.440 14:01:03 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:38.440 14:01:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:38.440 ************************************ 00:30:38.440 START TEST fio_dif_1_multi_subsystems 00:30:38.440 ************************************ 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in 
"$@" 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:38.440 bdev_null0 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:38.440 [2024-07-15 14:01:03.343769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:38.440 bdev_null1 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:38.440 14:01:03 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.440 { 00:30:38.440 "params": { 00:30:38.440 "name": "Nvme$subsystem", 00:30:38.440 "trtype": "$TEST_TRANSPORT", 00:30:38.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.440 "adrfam": "ipv4", 00:30:38.440 "trsvcid": "$NVMF_PORT", 00:30:38.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.440 "hdgst": ${hdgst:-false}, 00:30:38.440 "ddgst": ${ddgst:-false} 00:30:38.440 }, 00:30:38.440 "method": "bdev_nvme_attach_controller" 00:30:38.440 } 00:30:38.440 EOF 00:30:38.440 )") 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.440 { 00:30:38.440 "params": { 00:30:38.440 "name": "Nvme$subsystem", 00:30:38.440 "trtype": "$TEST_TRANSPORT", 00:30:38.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.440 "adrfam": "ipv4", 00:30:38.440 "trsvcid": "$NVMF_PORT", 00:30:38.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.440 "hdgst": ${hdgst:-false}, 00:30:38.440 "ddgst": ${ddgst:-false} 00:30:38.440 }, 00:30:38.440 "method": "bdev_nvme_attach_controller" 00:30:38.440 } 00:30:38.440 EOF 00:30:38.440 )") 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
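The JSON printed just below is what gen_nvmf_target_json assembled for the two subsystems; fio never sees the target directly but reads this config through a file descriptor while the SPDK bdev fio plugin is preloaded. A hand-runnable reduction of the fio_bdev invocation traced above (the fd wiring is an assumption about what the wrapper does, not a quote of it):

    exec 62< <(create_json_sub_conf 0 1)     # the bdev_nvme_attach_controller JSON shown below
    exec 61< <(gen_fio_conf)                 # the filename0/filename1 job sections
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61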
00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:38.440 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:38.440 "params": { 00:30:38.440 "name": "Nvme0", 00:30:38.440 "trtype": "tcp", 00:30:38.440 "traddr": "10.0.0.2", 00:30:38.440 "adrfam": "ipv4", 00:30:38.440 "trsvcid": "4420", 00:30:38.440 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:38.440 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:38.440 "hdgst": false, 00:30:38.440 "ddgst": false 00:30:38.440 }, 00:30:38.440 "method": "bdev_nvme_attach_controller" 00:30:38.440 },{ 00:30:38.440 "params": { 00:30:38.440 "name": "Nvme1", 00:30:38.440 "trtype": "tcp", 00:30:38.440 "traddr": "10.0.0.2", 00:30:38.440 "adrfam": "ipv4", 00:30:38.440 "trsvcid": "4420", 00:30:38.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:38.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:38.440 "hdgst": false, 00:30:38.440 "ddgst": false 00:30:38.440 }, 00:30:38.440 "method": "bdev_nvme_attach_controller" 00:30:38.440 }' 00:30:38.441 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:38.441 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:38.441 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:38.441 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:38.441 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:38.441 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:38.441 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:38.441 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:38.441 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:38.441 14:01:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:38.441 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:38.441 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:38.441 fio-3.35 00:30:38.441 Starting 2 threads 00:30:38.441 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.439 00:30:48.439 filename0: (groupid=0, jobs=1): err= 0: pid=1296969: Mon Jul 15 14:01:14 2024 00:30:48.440 read: IOPS=184, BW=739KiB/s (757kB/s)(7408KiB/10023msec) 00:30:48.440 slat (nsec): min=5405, max=38504, avg=7620.01, stdev=4079.98 00:30:48.440 clat (usec): min=693, max=43830, avg=21626.73, stdev=20198.39 00:30:48.440 lat (usec): min=699, max=43868, avg=21634.35, stdev=20197.82 00:30:48.440 clat percentiles (usec): 00:30:48.440 | 1.00th=[ 1037], 5.00th=[ 1237], 10.00th=[ 1303], 20.00th=[ 1352], 00:30:48.440 | 30.00th=[ 1385], 40.00th=[ 1582], 50.00th=[41157], 60.00th=[41681], 00:30:48.440 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:30:48.440 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:30:48.440 | 99.99th=[43779] 
00:30:48.440 bw ( KiB/s): min= 704, max= 768, per=50.06%, avg=739.20, stdev=32.67, samples=20 00:30:48.440 iops : min= 176, max= 192, avg=184.80, stdev= 8.17, samples=20 00:30:48.440 lat (usec) : 750=0.22%, 1000=0.43% 00:30:48.440 lat (msec) : 2=49.24%, 50=50.11% 00:30:48.440 cpu : usr=97.13%, sys=2.64%, ctx=13, majf=0, minf=130 00:30:48.440 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:48.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.440 issued rwts: total=1852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.440 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:48.440 filename1: (groupid=0, jobs=1): err= 0: pid=1296970: Mon Jul 15 14:01:14 2024 00:30:48.440 read: IOPS=184, BW=737KiB/s (755kB/s)(7392KiB/10025msec) 00:30:48.440 slat (nsec): min=5406, max=65769, avg=7313.71, stdev=4276.64 00:30:48.440 clat (usec): min=841, max=43885, avg=21678.92, stdev=20217.42 00:30:48.440 lat (usec): min=847, max=43922, avg=21686.24, stdev=20216.96 00:30:48.440 clat percentiles (usec): 00:30:48.440 | 1.00th=[ 1057], 5.00th=[ 1123], 10.00th=[ 1287], 20.00th=[ 1336], 00:30:48.440 | 30.00th=[ 1369], 40.00th=[ 1598], 50.00th=[41157], 60.00th=[41681], 00:30:48.440 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:30:48.440 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:30:48.440 | 99.99th=[43779] 00:30:48.440 bw ( KiB/s): min= 672, max= 768, per=49.92%, avg=737.60, stdev=35.17, samples=20 00:30:48.440 iops : min= 168, max= 192, avg=184.40, stdev= 8.79, samples=20 00:30:48.440 lat (usec) : 1000=0.54% 00:30:48.440 lat (msec) : 2=49.24%, 50=50.22% 00:30:48.440 cpu : usr=97.39%, sys=2.38%, ctx=13, majf=0, minf=203 00:30:48.440 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:48.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.440 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.440 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:48.440 00:30:48.440 Run status group 0 (all jobs): 00:30:48.440 READ: bw=1476KiB/s (1512kB/s), 737KiB/s-739KiB/s (755kB/s-757kB/s), io=14.5MiB (15.2MB), run=10023-10025msec 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:48.440 14:01:14 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.440 00:30:48.440 real 0m11.355s 00:30:48.440 user 0m32.882s 00:30:48.440 sys 0m0.882s 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:48.440 14:01:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:48.440 ************************************ 00:30:48.440 END TEST fio_dif_1_multi_subsystems 00:30:48.440 ************************************ 00:30:48.440 14:01:14 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:48.440 14:01:14 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:48.440 14:01:14 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:48.440 14:01:14 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:48.440 14:01:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:48.440 ************************************ 00:30:48.440 START TEST fio_dif_rand_params 00:30:48.440 ************************************ 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub 
in "$@" 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.440 bdev_null0 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.440 [2024-07-15 14:01:14.777155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:48.440 14:01:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:48.440 { 00:30:48.440 "params": { 00:30:48.440 "name": "Nvme$subsystem", 00:30:48.440 
"trtype": "$TEST_TRANSPORT", 00:30:48.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:48.441 "adrfam": "ipv4", 00:30:48.441 "trsvcid": "$NVMF_PORT", 00:30:48.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:48.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:48.441 "hdgst": ${hdgst:-false}, 00:30:48.441 "ddgst": ${ddgst:-false} 00:30:48.441 }, 00:30:48.441 "method": "bdev_nvme_attach_controller" 00:30:48.441 } 00:30:48.441 EOF 00:30:48.441 )") 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:48.441 "params": { 00:30:48.441 "name": "Nvme0", 00:30:48.441 "trtype": "tcp", 00:30:48.441 "traddr": "10.0.0.2", 00:30:48.441 "adrfam": "ipv4", 00:30:48.441 "trsvcid": "4420", 00:30:48.441 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:48.441 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:48.441 "hdgst": false, 00:30:48.441 "ddgst": false 00:30:48.441 }, 00:30:48.441 "method": "bdev_nvme_attach_controller" 00:30:48.441 }' 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:48.441 14:01:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:48.701 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:48.701 ... 
00:30:48.701 fio-3.35 00:30:48.701 Starting 3 threads 00:30:48.962 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.545 00:30:55.545 filename0: (groupid=0, jobs=1): err= 0: pid=1299364: Mon Jul 15 14:01:20 2024 00:30:55.545 read: IOPS=71, BW=9150KiB/s (9370kB/s)(45.1MiB/5050msec) 00:30:55.545 slat (nsec): min=5431, max=65095, avg=7578.95, stdev=3764.81 00:30:55.545 clat (usec): min=9503, max=95357, avg=41814.61, stdev=23740.68 00:30:55.545 lat (usec): min=9512, max=95365, avg=41822.18, stdev=23740.48 00:30:55.545 clat percentiles (usec): 00:30:55.545 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[11076], 20.00th=[12780], 00:30:55.545 | 30.00th=[15008], 40.00th=[50594], 50.00th=[51643], 60.00th=[52167], 00:30:55.545 | 70.00th=[52691], 80.00th=[53216], 90.00th=[54789], 95.00th=[91751], 00:30:55.545 | 99.00th=[93848], 99.50th=[94897], 99.90th=[94897], 99.95th=[94897], 00:30:55.545 | 99.99th=[94897] 00:30:55.545 bw ( KiB/s): min= 6144, max=12544, per=20.79%, avg=9190.40, stdev=2150.16, samples=10 00:30:55.545 iops : min= 48, max= 98, avg=71.80, stdev=16.80, samples=10 00:30:55.545 lat (msec) : 10=2.77%, 20=31.30%, 50=3.60%, 100=62.33% 00:30:55.545 cpu : usr=97.19%, sys=2.57%, ctx=7, majf=0, minf=153 00:30:55.545 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:55.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.545 issued rwts: total=361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.545 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:55.545 filename0: (groupid=0, jobs=1): err= 0: pid=1299365: Mon Jul 15 14:01:20 2024 00:30:55.545 read: IOPS=148, BW=18.6MiB/s (19.5MB/s)(93.0MiB/5001msec) 00:30:55.545 slat (nsec): min=5409, max=31838, avg=7490.31, stdev=1840.52 00:30:55.545 clat (usec): min=6072, max=92715, avg=20152.82, stdev=18586.21 00:30:55.545 lat (usec): min=6078, max=92724, avg=20160.31, stdev=18586.20 00:30:55.545 clat percentiles (usec): 00:30:55.545 | 1.00th=[ 6587], 5.00th=[ 7701], 10.00th=[ 8094], 20.00th=[ 8586], 00:30:55.545 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[11338], 00:30:55.545 | 70.00th=[13042], 80.00th=[50070], 90.00th=[51643], 95.00th=[52691], 00:30:55.545 | 99.00th=[56361], 99.50th=[91751], 99.90th=[92799], 99.95th=[92799], 00:30:55.545 | 99.99th=[92799] 00:30:55.545 bw ( KiB/s): min=15360, max=25344, per=43.63%, avg=19285.33, stdev=3210.22, samples=9 00:30:55.545 iops : min= 120, max= 198, avg=150.67, stdev=25.08, samples=9 00:30:55.545 lat (msec) : 10=43.28%, 20=32.80%, 50=3.49%, 100=20.43% 00:30:55.545 cpu : usr=96.24%, sys=3.48%, ctx=15, majf=0, minf=98 00:30:55.545 IO depths : 1=7.0%, 2=93.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:55.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.545 issued rwts: total=744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.545 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:55.545 filename0: (groupid=0, jobs=1): err= 0: pid=1299367: Mon Jul 15 14:01:20 2024 00:30:55.545 read: IOPS=126, BW=15.9MiB/s (16.6MB/s)(79.9MiB/5034msec) 00:30:55.545 slat (nsec): min=5433, max=35003, avg=7107.99, stdev=2151.82 00:30:55.545 clat (usec): min=5799, max=93619, avg=23615.52, stdev=21570.98 00:30:55.545 lat (usec): min=5804, max=93626, avg=23622.63, stdev=21570.98 00:30:55.545 clat percentiles (usec): 
00:30:55.545 | 1.00th=[ 5866], 5.00th=[ 6587], 10.00th=[ 7308], 20.00th=[ 8291], 00:30:55.545 | 30.00th=[ 8848], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[11731], 00:30:55.545 | 70.00th=[49546], 80.00th=[51119], 90.00th=[52167], 95.00th=[53216], 00:30:55.545 | 99.00th=[91751], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:30:55.545 | 99.99th=[93848] 00:30:55.545 bw ( KiB/s): min= 9216, max=21504, per=36.83%, avg=16281.60, stdev=4173.84, samples=10 00:30:55.545 iops : min= 72, max= 168, avg=127.20, stdev=32.61, samples=10 00:30:55.545 lat (msec) : 10=41.47%, 20=26.76%, 50=3.29%, 100=28.48% 00:30:55.545 cpu : usr=96.88%, sys=2.84%, ctx=9, majf=0, minf=132 00:30:55.545 IO depths : 1=4.4%, 2=95.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:55.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.545 issued rwts: total=639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.545 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:55.545 00:30:55.545 Run status group 0 (all jobs): 00:30:55.545 READ: bw=43.2MiB/s (45.3MB/s), 9150KiB/s-18.6MiB/s (9370kB/s-19.5MB/s), io=218MiB (229MB), run=5001-5050msec 00:30:55.545 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:55.545 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
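The NULL_DIF=2 parameter block above switches the next pass to DIF-type-2 null bdevs driven with 4k random reads, 8 jobs and a queue depth of 16 against three files (hence the 24 fio threads later). The create_subsystem calls that follow boil down to four RPCs per subsystem; a sketch of that sequence for subsystem 0 is below. The rpc.py path is an assumption, since the harness invokes it through its rpc_cmd wrapper.

# Per-subsystem setup mirrored from the create_subsystem trace that follows:
# a 64 MB null bdev (512-byte blocks, 16-byte metadata, DIF type 2), exposed
# through an NVMe-oF subsystem listening on TCP 10.0.0.2:4420.
rpc=./scripts/rpc.py   # assumed path; the harness calls this via rpc_cmd

sub=0
$rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
$rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
    --serial-number "53313233-$sub" --allow-any-host
$rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
$rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
    -t tcp -a 10.0.0.2 -s 4420

Subsystems 1 and 2 repeat the same four calls with their own ids, and the destroy_subsystems pass after the 24-thread run tears everything down again with nvmf_delete_subsystem followed by bdev_null_delete, as the trace shows.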
00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 bdev_null0 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 [2024-07-15 14:01:21.087547] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 bdev_null1 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 bdev_null2 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:55.546 { 00:30:55.546 "params": { 00:30:55.546 "name": "Nvme$subsystem", 00:30:55.546 "trtype": "$TEST_TRANSPORT", 00:30:55.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.546 "adrfam": "ipv4", 00:30:55.546 "trsvcid": "$NVMF_PORT", 00:30:55.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.546 "hdgst": ${hdgst:-false}, 00:30:55.546 "ddgst": ${ddgst:-false} 00:30:55.546 }, 00:30:55.546 "method": "bdev_nvme_attach_controller" 00:30:55.546 } 00:30:55.546 EOF 00:30:55.546 )") 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:55.546 { 00:30:55.546 "params": { 00:30:55.546 "name": "Nvme$subsystem", 00:30:55.546 "trtype": "$TEST_TRANSPORT", 00:30:55.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.546 "adrfam": "ipv4", 00:30:55.546 "trsvcid": "$NVMF_PORT", 00:30:55.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.546 "hdgst": ${hdgst:-false}, 00:30:55.546 "ddgst": ${ddgst:-false} 00:30:55.546 }, 00:30:55.546 "method": "bdev_nvme_attach_controller" 00:30:55.546 } 00:30:55.546 EOF 00:30:55.546 )") 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:55.546 14:01:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:55.547 14:01:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:55.547 14:01:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:55.547 { 00:30:55.547 "params": { 00:30:55.547 "name": "Nvme$subsystem", 00:30:55.547 "trtype": "$TEST_TRANSPORT", 00:30:55.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.547 "adrfam": "ipv4", 00:30:55.547 "trsvcid": "$NVMF_PORT", 00:30:55.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.547 "hdgst": ${hdgst:-false}, 00:30:55.547 "ddgst": ${ddgst:-false} 00:30:55.547 }, 00:30:55.547 "method": "bdev_nvme_attach_controller" 00:30:55.547 } 00:30:55.547 EOF 00:30:55.547 )") 00:30:55.547 14:01:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:55.547 14:01:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:55.547 14:01:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:55.547 14:01:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:55.547 "params": { 00:30:55.547 "name": "Nvme0", 00:30:55.547 "trtype": "tcp", 00:30:55.547 "traddr": "10.0.0.2", 00:30:55.547 "adrfam": "ipv4", 00:30:55.547 "trsvcid": "4420", 00:30:55.547 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:55.547 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:55.547 "hdgst": false, 00:30:55.547 "ddgst": false 00:30:55.547 }, 00:30:55.547 "method": "bdev_nvme_attach_controller" 00:30:55.547 },{ 00:30:55.547 "params": { 00:30:55.547 "name": "Nvme1", 00:30:55.547 "trtype": "tcp", 00:30:55.547 "traddr": "10.0.0.2", 00:30:55.547 "adrfam": "ipv4", 00:30:55.547 "trsvcid": "4420", 00:30:55.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:55.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:55.547 "hdgst": false, 00:30:55.547 "ddgst": false 00:30:55.547 }, 00:30:55.547 "method": "bdev_nvme_attach_controller" 00:30:55.547 },{ 00:30:55.547 "params": { 00:30:55.547 "name": "Nvme2", 00:30:55.547 "trtype": "tcp", 00:30:55.547 "traddr": "10.0.0.2", 00:30:55.547 "adrfam": "ipv4", 00:30:55.547 "trsvcid": "4420", 00:30:55.547 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:55.547 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:55.547 "hdgst": false, 00:30:55.547 "ddgst": false 00:30:55.547 }, 00:30:55.547 "method": "bdev_nvme_attach_controller" 00:30:55.547 }' 00:30:55.547 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:55.547 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:55.547 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:55.547 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.547 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:55.547 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:55.547 14:01:21 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:30:55.547 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:55.547 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:55.547 14:01:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.547 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:55.547 ... 00:30:55.547 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:55.547 ... 00:30:55.547 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:55.547 ... 00:30:55.547 fio-3.35 00:30:55.547 Starting 24 threads 00:30:55.547 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.784 00:31:07.784 filename0: (groupid=0, jobs=1): err= 0: pid=1300782: Mon Jul 15 14:01:32 2024 00:31:07.784 read: IOPS=505, BW=2022KiB/s (2070kB/s)(19.8MiB/10017msec) 00:31:07.784 slat (nsec): min=5606, max=96800, avg=12226.83, stdev=10483.92 00:31:07.784 clat (usec): min=4552, max=36758, avg=31556.42, stdev=3418.22 00:31:07.784 lat (usec): min=4568, max=36765, avg=31568.64, stdev=3417.16 00:31:07.784 clat percentiles (usec): 00:31:07.784 | 1.00th=[ 7767], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:31:07.784 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:07.784 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:31:07.784 | 99.00th=[33817], 99.50th=[34341], 99.90th=[36963], 99.95th=[36963], 00:31:07.784 | 99.99th=[36963] 00:31:07.784 bw ( KiB/s): min= 1916, max= 2488, per=4.22%, avg=2018.60, stdev=134.01, samples=20 00:31:07.784 iops : min= 479, max= 622, avg=504.65, stdev=33.50, samples=20 00:31:07.784 lat (msec) : 10=1.40%, 20=0.32%, 50=98.28% 00:31:07.784 cpu : usr=97.44%, sys=1.28%, ctx=88, majf=0, minf=9 00:31:07.784 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:07.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.784 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.784 issued rwts: total=5063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.784 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.784 filename0: (groupid=0, jobs=1): err= 0: pid=1300784: Mon Jul 15 14:01:32 2024 00:31:07.784 read: IOPS=495, BW=1982KiB/s (2030kB/s)(19.4MiB/10009msec) 00:31:07.784 slat (nsec): min=5580, max=87973, avg=22689.75, stdev=14751.88 00:31:07.784 clat (usec): min=17490, max=54764, avg=32098.24, stdev=2045.99 00:31:07.784 lat (usec): min=17529, max=54785, avg=32120.93, stdev=2045.76 00:31:07.784 clat percentiles (usec): 00:31:07.784 | 1.00th=[27657], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:07.784 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:07.784 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:31:07.784 | 99.00th=[40109], 99.50th=[44827], 99.90th=[54789], 99.95th=[54789], 00:31:07.784 | 99.99th=[54789] 00:31:07.784 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1973.89, stdev=76.40, samples=19 00:31:07.784 iops : min= 448, max= 512, avg=493.47, stdev=19.10, samples=19 00:31:07.784 lat (msec) : 20=0.46%, 50=99.21%, 
100=0.32% 00:31:07.784 cpu : usr=99.14%, sys=0.57%, ctx=12, majf=0, minf=9 00:31:07.784 IO depths : 1=5.5%, 2=11.7%, 4=24.9%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:31:07.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.784 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.784 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.784 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.784 filename0: (groupid=0, jobs=1): err= 0: pid=1300785: Mon Jul 15 14:01:32 2024 00:31:07.784 read: IOPS=494, BW=1979KiB/s (2027kB/s)(19.4MiB/10026msec) 00:31:07.784 slat (nsec): min=5588, max=84667, avg=17959.56, stdev=14325.42 00:31:07.784 clat (usec): min=21909, max=55092, avg=32111.37, stdev=1286.65 00:31:07.784 lat (usec): min=21915, max=55108, avg=32129.33, stdev=1285.08 00:31:07.784 clat percentiles (usec): 00:31:07.784 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:31:07.784 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:07.784 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:31:07.784 | 99.00th=[34341], 99.50th=[41157], 99.90th=[45876], 99.95th=[45876], 00:31:07.784 | 99.99th=[55313] 00:31:07.784 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1983.55, stdev=65.23, samples=20 00:31:07.784 iops : min= 480, max= 512, avg=495.85, stdev=16.27, samples=20 00:31:07.784 lat (msec) : 50=99.96%, 100=0.04% 00:31:07.784 cpu : usr=99.13%, sys=0.58%, ctx=14, majf=0, minf=9 00:31:07.784 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:07.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.784 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.784 issued rwts: total=4961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.784 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.784 filename0: (groupid=0, jobs=1): err= 0: pid=1300786: Mon Jul 15 14:01:32 2024 00:31:07.784 read: IOPS=498, BW=1993KiB/s (2041kB/s)(19.5MiB/10018msec) 00:31:07.784 slat (nsec): min=5577, max=83656, avg=14112.72, stdev=10788.13 00:31:07.784 clat (usec): min=13183, max=40299, avg=31995.22, stdev=1405.16 00:31:07.784 lat (usec): min=13196, max=40306, avg=32009.33, stdev=1404.94 00:31:07.784 clat percentiles (usec): 00:31:07.784 | 1.00th=[25297], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:07.784 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:07.784 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:31:07.784 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[40109], 00:31:07.784 | 99.99th=[40109] 00:31:07.784 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1990.40, stdev=65.33, samples=20 00:31:07.784 iops : min= 480, max= 512, avg=497.60, stdev=16.33, samples=20 00:31:07.784 lat (msec) : 20=0.32%, 50=99.68% 00:31:07.784 cpu : usr=99.08%, sys=0.61%, ctx=62, majf=0, minf=9 00:31:07.784 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:07.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.784 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.784 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.784 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.784 filename0: (groupid=0, jobs=1): err= 0: pid=1300787: Mon Jul 15 14:01:32 2024 00:31:07.784 read: IOPS=503, 
BW=2013KiB/s (2061kB/s)(19.7MiB/10005msec) 00:31:07.784 slat (nsec): min=5569, max=70800, avg=13735.66, stdev=9718.56 00:31:07.784 clat (usec): min=12665, max=70669, avg=31681.70, stdev=3853.50 00:31:07.784 lat (usec): min=12677, max=70685, avg=31695.44, stdev=3853.78 00:31:07.784 clat percentiles (usec): 00:31:07.784 | 1.00th=[19268], 5.00th=[23725], 10.00th=[28967], 20.00th=[31589], 00:31:07.784 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:07.784 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[34341], 00:31:07.784 | 99.00th=[45876], 99.50th=[48497], 99.90th=[59507], 99.95th=[60031], 00:31:07.784 | 99.99th=[70779] 00:31:07.784 bw ( KiB/s): min= 1792, max= 2192, per=4.19%, avg=2004.79, stdev=92.68, samples=19 00:31:07.784 iops : min= 448, max= 548, avg=501.16, stdev=23.15, samples=19 00:31:07.784 lat (msec) : 20=1.07%, 50=98.61%, 100=0.32% 00:31:07.784 cpu : usr=95.32%, sys=2.32%, ctx=146, majf=0, minf=9 00:31:07.784 IO depths : 1=4.1%, 2=9.0%, 4=20.3%, 8=57.5%, 16=9.0%, 32=0.0%, >=64=0.0% 00:31:07.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.784 complete : 0=0.0%, 4=92.9%, 8=1.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.784 issued rwts: total=5034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.784 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.784 filename0: (groupid=0, jobs=1): err= 0: pid=1300788: Mon Jul 15 14:01:32 2024 00:31:07.784 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.4MiB/10006msec) 00:31:07.784 slat (nsec): min=5529, max=84131, avg=21448.29, stdev=13872.88 00:31:07.784 clat (usec): min=10717, max=49057, avg=31970.42, stdev=1862.52 00:31:07.784 lat (usec): min=10724, max=49073, avg=31991.86, stdev=1862.91 00:31:07.784 clat percentiles (usec): 00:31:07.784 | 1.00th=[27919], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:07.784 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:07.784 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:31:07.784 | 99.00th=[33817], 99.50th=[34866], 99.90th=[49021], 99.95th=[49021], 00:31:07.784 | 99.99th=[49021] 00:31:07.784 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1980.53, stdev=65.24, samples=19 00:31:07.784 iops : min= 480, max= 512, avg=495.05, stdev=16.31, samples=19 00:31:07.784 lat (msec) : 20=0.68%, 50=99.32% 00:31:07.784 cpu : usr=99.20%, sys=0.48%, ctx=49, majf=0, minf=9 00:31:07.784 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:07.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.784 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.784 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.784 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.784 filename0: (groupid=0, jobs=1): err= 0: pid=1300789: Mon Jul 15 14:01:32 2024 00:31:07.784 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.4MiB/10007msec) 00:31:07.784 slat (nsec): min=5433, max=84534, avg=21158.23, stdev=13678.97 00:31:07.784 clat (usec): min=10819, max=50787, avg=31977.74, stdev=1924.65 00:31:07.784 lat (usec): min=10825, max=50793, avg=31998.89, stdev=1925.00 00:31:07.784 clat percentiles (usec): 00:31:07.784 | 1.00th=[27657], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:07.784 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:07.784 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:31:07.784 | 99.00th=[33817], 
99.50th=[34866], 99.90th=[50594], 99.95th=[50594], 00:31:07.784 | 99.99th=[50594] 00:31:07.784 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1980.37, stdev=63.85, samples=19 00:31:07.784 iops : min= 480, max= 512, avg=495.05, stdev=15.92, samples=19 00:31:07.784 lat (msec) : 20=0.64%, 50=99.00%, 100=0.36% 00:31:07.785 cpu : usr=97.23%, sys=1.40%, ctx=114, majf=0, minf=9 00:31:07.785 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:31:07.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.785 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.785 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.785 filename0: (groupid=0, jobs=1): err= 0: pid=1300790: Mon Jul 15 14:01:32 2024 00:31:07.785 read: IOPS=503, BW=2014KiB/s (2063kB/s)(19.7MiB/10005msec) 00:31:07.785 slat (nsec): min=5568, max=79655, avg=13336.67, stdev=11076.48 00:31:07.785 clat (usec): min=7339, max=75941, avg=31706.82, stdev=4954.94 00:31:07.785 lat (usec): min=7345, max=75959, avg=31720.15, stdev=4954.88 00:31:07.785 clat percentiles (usec): 00:31:07.785 | 1.00th=[19792], 5.00th=[24773], 10.00th=[25822], 20.00th=[27919], 00:31:07.785 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:07.785 | 70.00th=[32375], 80.00th=[33162], 90.00th=[37487], 95.00th=[39060], 00:31:07.785 | 99.00th=[50070], 99.50th=[51119], 99.90th=[60031], 99.95th=[60031], 00:31:07.785 | 99.99th=[76022] 00:31:07.785 bw ( KiB/s): min= 1808, max= 2155, per=4.20%, avg=2005.63, stdev=73.95, samples=19 00:31:07.785 iops : min= 452, max= 538, avg=501.37, stdev=18.40, samples=19 00:31:07.785 lat (msec) : 10=0.08%, 20=1.05%, 50=98.02%, 100=0.85% 00:31:07.785 cpu : usr=99.17%, sys=0.54%, ctx=9, majf=0, minf=9 00:31:07.785 IO depths : 1=0.1%, 2=0.9%, 4=5.3%, 8=78.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:31:07.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.785 complete : 0=0.0%, 4=89.6%, 8=8.0%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.785 issued rwts: total=5038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.785 filename1: (groupid=0, jobs=1): err= 0: pid=1300791: Mon Jul 15 14:01:32 2024 00:31:07.785 read: IOPS=494, BW=1980KiB/s (2027kB/s)(19.4MiB/10022msec) 00:31:07.785 slat (nsec): min=5606, max=74562, avg=17169.94, stdev=13598.59 00:31:07.785 clat (usec): min=22177, max=76258, avg=32175.20, stdev=2283.60 00:31:07.785 lat (usec): min=22185, max=76279, avg=32192.37, stdev=2282.91 00:31:07.785 clat percentiles (usec): 00:31:07.785 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:31:07.785 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:07.785 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:31:07.785 | 99.00th=[36439], 99.50th=[41157], 99.90th=[66847], 99.95th=[66847], 00:31:07.785 | 99.99th=[76022] 00:31:07.785 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1973.89, stdev=77.69, samples=19 00:31:07.785 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:31:07.785 lat (msec) : 50=99.68%, 100=0.32% 00:31:07.785 cpu : usr=96.92%, sys=1.72%, ctx=107, majf=0, minf=9 00:31:07.785 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:07.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.785 complete : 0=0.0%, 
4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.785 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.785 filename1: (groupid=0, jobs=1): err= 0: pid=1300792: Mon Jul 15 14:01:32 2024 00:31:07.785 read: IOPS=495, BW=1982KiB/s (2029kB/s)(19.4MiB/10012msec) 00:31:07.785 slat (nsec): min=5636, max=69635, avg=13073.13, stdev=8026.73 00:31:07.785 clat (usec): min=23209, max=56344, avg=32184.67, stdev=1508.35 00:31:07.785 lat (usec): min=23215, max=56364, avg=32197.74, stdev=1508.25 00:31:07.785 clat percentiles (usec): 00:31:07.785 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:31:07.785 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:07.785 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:31:07.785 | 99.00th=[34341], 99.50th=[34341], 99.90th=[56361], 99.95th=[56361], 00:31:07.785 | 99.99th=[56361] 00:31:07.785 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1980.63, stdev=78.31, samples=19 00:31:07.785 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:31:07.785 lat (msec) : 50=99.68%, 100=0.32% 00:31:07.785 cpu : usr=97.24%, sys=1.42%, ctx=107, majf=0, minf=9 00:31:07.785 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:07.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.785 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.785 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.785 filename1: (groupid=0, jobs=1): err= 0: pid=1300793: Mon Jul 15 14:01:32 2024 00:31:07.785 read: IOPS=508, BW=2032KiB/s (2081kB/s)(19.9MiB/10014msec) 00:31:07.785 slat (nsec): min=5575, max=92697, avg=9300.11, stdev=5415.99 00:31:07.785 clat (usec): min=4053, max=43865, avg=31409.71, stdev=3486.53 00:31:07.785 lat (usec): min=4065, max=43890, avg=31419.01, stdev=3485.90 00:31:07.785 clat percentiles (usec): 00:31:07.785 | 1.00th=[ 8848], 5.00th=[23987], 10.00th=[31589], 20.00th=[31851], 00:31:07.785 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:07.785 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:31:07.785 | 99.00th=[33817], 99.50th=[34341], 99.90th=[36439], 99.95th=[36439], 00:31:07.785 | 99.99th=[43779] 00:31:07.785 bw ( KiB/s): min= 1920, max= 2432, per=4.24%, avg=2028.80, stdev=133.12, samples=20 00:31:07.785 iops : min= 480, max= 608, avg=507.20, stdev=33.28, samples=20 00:31:07.785 lat (msec) : 10=1.22%, 20=1.00%, 50=97.78% 00:31:07.785 cpu : usr=99.01%, sys=0.68%, ctx=43, majf=0, minf=9 00:31:07.785 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:07.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.785 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.785 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.785 filename1: (groupid=0, jobs=1): err= 0: pid=1300795: Mon Jul 15 14:01:32 2024 00:31:07.785 read: IOPS=501, BW=2007KiB/s (2056kB/s)(19.6MiB/10011msec) 00:31:07.785 slat (usec): min=5, max=121, avg= 8.49, stdev= 4.81 00:31:07.785 clat (usec): min=5075, max=44608, avg=31779.44, stdev=3028.58 00:31:07.785 lat (usec): min=5094, max=44614, avg=31787.93, stdev=3026.97 00:31:07.785 clat 
percentiles (usec): 00:31:07.785 | 1.00th=[14877], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:07.785 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:07.785 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:31:07.785 | 99.00th=[34341], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:31:07.785 | 99.99th=[44827] 00:31:07.785 bw ( KiB/s): min= 1916, max= 2304, per=4.20%, avg=2008.60, stdev=93.71, samples=20 00:31:07.785 iops : min= 479, max= 576, avg=502.15, stdev=23.43, samples=20 00:31:07.785 lat (msec) : 10=0.90%, 20=0.74%, 50=98.37% 00:31:07.785 cpu : usr=96.97%, sys=1.68%, ctx=52, majf=0, minf=9 00:31:07.785 IO depths : 1=5.8%, 2=11.9%, 4=24.9%, 8=50.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:07.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.785 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.785 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.785 filename1: (groupid=0, jobs=1): err= 0: pid=1300796: Mon Jul 15 14:01:32 2024 00:31:07.785 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10005msec) 00:31:07.785 slat (nsec): min=5621, max=69127, avg=13103.93, stdev=8508.95 00:31:07.785 clat (usec): min=18534, max=56894, avg=32158.82, stdev=1772.79 00:31:07.785 lat (usec): min=18541, max=56924, avg=32171.93, stdev=1772.73 00:31:07.785 clat percentiles (usec): 00:31:07.785 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:31:07.785 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:07.785 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:31:07.785 | 99.00th=[34341], 99.50th=[34341], 99.90th=[56886], 99.95th=[56886], 00:31:07.785 | 99.99th=[56886] 00:31:07.785 bw ( KiB/s): min= 1795, max= 2048, per=4.14%, avg=1980.79, stdev=76.62, samples=19 00:31:07.785 iops : min= 448, max= 512, avg=495.16, stdev=19.26, samples=19 00:31:07.785 lat (msec) : 20=0.38%, 50=99.29%, 100=0.32% 00:31:07.785 cpu : usr=97.76%, sys=1.30%, ctx=118, majf=0, minf=9 00:31:07.785 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:07.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.785 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.785 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.785 filename1: (groupid=0, jobs=1): err= 0: pid=1300797: Mon Jul 15 14:01:32 2024 00:31:07.785 read: IOPS=495, BW=1982KiB/s (2029kB/s)(19.4MiB/10005msec) 00:31:07.785 slat (nsec): min=5434, max=85619, avg=20066.43, stdev=13174.81 00:31:07.785 clat (usec): min=9661, max=49214, avg=32100.92, stdev=2486.93 00:31:07.785 lat (usec): min=9666, max=49230, avg=32120.98, stdev=2486.71 00:31:07.785 clat percentiles (usec): 00:31:07.785 | 1.00th=[25297], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:07.785 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:07.785 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:31:07.785 | 99.00th=[42730], 99.50th=[46924], 99.90th=[49021], 99.95th=[49021], 00:31:07.785 | 99.99th=[49021] 00:31:07.785 bw ( KiB/s): min= 1888, max= 2048, per=4.13%, avg=1972.11, stdev=66.36, samples=19 00:31:07.785 iops : min= 472, max= 512, avg=492.95, stdev=16.58, samples=19 00:31:07.785 lat (msec) : 
10=0.14%, 20=0.52%, 50=99.33% 00:31:07.785 cpu : usr=97.40%, sys=1.37%, ctx=70, majf=0, minf=9 00:31:07.785 IO depths : 1=5.8%, 2=11.7%, 4=23.8%, 8=51.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:07.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.785 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.785 issued rwts: total=4957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.785 filename1: (groupid=0, jobs=1): err= 0: pid=1300798: Mon Jul 15 14:01:32 2024 00:31:07.785 read: IOPS=495, BW=1982KiB/s (2029kB/s)(19.4MiB/10012msec) 00:31:07.785 slat (nsec): min=5606, max=85081, avg=19979.21, stdev=14627.11 00:31:07.785 clat (usec): min=21697, max=66123, avg=32111.40, stdev=1723.24 00:31:07.785 lat (usec): min=21713, max=66158, avg=32131.38, stdev=1722.78 00:31:07.785 clat percentiles (usec): 00:31:07.785 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:07.785 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:07.785 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:31:07.785 | 99.00th=[34341], 99.50th=[38536], 99.90th=[56886], 99.95th=[56886], 00:31:07.785 | 99.99th=[66323] 00:31:07.785 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1980.63, stdev=78.31, samples=19 00:31:07.785 iops : min= 448, max= 512, avg=495.16, stdev=19.58, samples=19 00:31:07.785 lat (msec) : 50=99.68%, 100=0.32% 00:31:07.785 cpu : usr=98.86%, sys=0.73%, ctx=153, majf=0, minf=9 00:31:07.785 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:07.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.786 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.786 filename1: (groupid=0, jobs=1): err= 0: pid=1300799: Mon Jul 15 14:01:32 2024 00:31:07.786 read: IOPS=499, BW=1997KiB/s (2045kB/s)(19.5MiB/10005msec) 00:31:07.786 slat (nsec): min=5607, max=85184, avg=17150.60, stdev=12464.51 00:31:07.786 clat (usec): min=12592, max=60390, avg=31903.06, stdev=3204.48 00:31:07.786 lat (usec): min=12599, max=60407, avg=31920.21, stdev=3204.46 00:31:07.786 clat percentiles (usec): 00:31:07.786 | 1.00th=[22414], 5.00th=[26084], 10.00th=[30802], 20.00th=[31589], 00:31:07.786 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:07.786 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[35914], 00:31:07.786 | 99.00th=[41157], 99.50th=[43254], 99.90th=[60556], 99.95th=[60556], 00:31:07.786 | 99.99th=[60556] 00:31:07.786 bw ( KiB/s): min= 1792, max= 2080, per=4.16%, avg=1988.84, stdev=72.77, samples=19 00:31:07.786 iops : min= 448, max= 520, avg=497.21, stdev=18.19, samples=19 00:31:07.786 lat (msec) : 20=0.52%, 50=99.16%, 100=0.32% 00:31:07.786 cpu : usr=99.16%, sys=0.53%, ctx=13, majf=0, minf=9 00:31:07.786 IO depths : 1=4.1%, 2=8.3%, 4=17.6%, 8=60.4%, 16=9.6%, 32=0.0%, >=64=0.0% 00:31:07.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 complete : 0=0.0%, 4=92.3%, 8=3.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 issued rwts: total=4996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.786 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.786 filename2: (groupid=0, jobs=1): err= 0: pid=1300800: Mon Jul 15 14:01:32 2024 
00:31:07.786 read: IOPS=495, BW=1982KiB/s (2029kB/s)(19.4MiB/10012msec) 00:31:07.786 slat (nsec): min=5582, max=80803, avg=21102.94, stdev=14390.05 00:31:07.786 clat (usec): min=10994, max=48484, avg=32112.96, stdev=2987.10 00:31:07.786 lat (usec): min=11014, max=48494, avg=32134.07, stdev=2986.99 00:31:07.786 clat percentiles (usec): 00:31:07.786 | 1.00th=[19268], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:07.786 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:07.786 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:31:07.786 | 99.00th=[45351], 99.50th=[46400], 99.90th=[47973], 99.95th=[47973], 00:31:07.786 | 99.99th=[48497] 00:31:07.786 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1973.89, stdev=61.56, samples=19 00:31:07.786 iops : min= 480, max= 512, avg=493.47, stdev=15.39, samples=19 00:31:07.786 lat (msec) : 20=1.23%, 50=98.77% 00:31:07.786 cpu : usr=99.10%, sys=0.57%, ctx=13, majf=0, minf=9 00:31:07.786 IO depths : 1=4.4%, 2=8.8%, 4=19.3%, 8=58.3%, 16=9.4%, 32=0.0%, >=64=0.0% 00:31:07.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 complete : 0=0.0%, 4=92.9%, 8=2.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.786 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.786 filename2: (groupid=0, jobs=1): err= 0: pid=1300801: Mon Jul 15 14:01:32 2024 00:31:07.786 read: IOPS=496, BW=1985KiB/s (2033kB/s)(19.4MiB/10005msec) 00:31:07.786 slat (nsec): min=5647, max=85859, avg=20874.65, stdev=13780.34 00:31:07.786 clat (usec): min=11174, max=60615, avg=32038.53, stdev=2433.98 00:31:07.786 lat (usec): min=11180, max=60631, avg=32059.41, stdev=2433.80 00:31:07.786 clat percentiles (usec): 00:31:07.786 | 1.00th=[23725], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:07.786 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:07.786 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:31:07.786 | 99.00th=[39584], 99.50th=[45351], 99.90th=[60556], 99.95th=[60556], 00:31:07.786 | 99.99th=[60556] 00:31:07.786 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1976.21, stdev=76.79, samples=19 00:31:07.786 iops : min= 448, max= 512, avg=494.05, stdev=19.20, samples=19 00:31:07.786 lat (msec) : 20=0.40%, 50=99.28%, 100=0.32% 00:31:07.786 cpu : usr=97.02%, sys=1.62%, ctx=103, majf=0, minf=9 00:31:07.786 IO depths : 1=5.8%, 2=12.0%, 4=24.8%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:07.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 issued rwts: total=4966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.786 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.786 filename2: (groupid=0, jobs=1): err= 0: pid=1300802: Mon Jul 15 14:01:32 2024 00:31:07.786 read: IOPS=504, BW=2020KiB/s (2068kB/s)(19.8MiB/10014msec) 00:31:07.786 slat (nsec): min=5584, max=93350, avg=14696.16, stdev=11526.47 00:31:07.786 clat (usec): min=4171, max=34436, avg=31570.52, stdev=3502.36 00:31:07.786 lat (usec): min=4187, max=34443, avg=31585.22, stdev=3501.46 00:31:07.786 clat percentiles (usec): 00:31:07.786 | 1.00th=[ 5473], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:07.786 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:07.786 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 
00:31:07.786 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:31:07.786 | 99.99th=[34341] 00:31:07.786 bw ( KiB/s): min= 1920, max= 2560, per=4.22%, avg=2016.00, stdev=143.11, samples=20 00:31:07.786 iops : min= 480, max= 640, avg=504.00, stdev=35.78, samples=20 00:31:07.786 lat (msec) : 10=1.54%, 20=0.36%, 50=98.10% 00:31:07.786 cpu : usr=99.01%, sys=0.62%, ctx=61, majf=0, minf=10 00:31:07.786 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:07.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.786 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.786 filename2: (groupid=0, jobs=1): err= 0: pid=1300803: Mon Jul 15 14:01:32 2024 00:31:07.786 read: IOPS=496, BW=1988KiB/s (2035kB/s)(19.4MiB/10014msec) 00:31:07.786 slat (nsec): min=5608, max=90705, avg=15958.62, stdev=10452.87 00:31:07.786 clat (usec): min=20933, max=43556, avg=32057.10, stdev=1144.44 00:31:07.786 lat (usec): min=20939, max=43577, avg=32073.06, stdev=1144.41 00:31:07.786 clat percentiles (usec): 00:31:07.786 | 1.00th=[30540], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:31:07.786 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:07.786 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:31:07.786 | 99.00th=[34341], 99.50th=[34341], 99.90th=[43779], 99.95th=[43779], 00:31:07.786 | 99.99th=[43779] 00:31:07.786 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1984.00, stdev=65.66, samples=20 00:31:07.786 iops : min= 480, max= 512, avg=496.00, stdev=16.42, samples=20 00:31:07.786 lat (msec) : 50=100.00% 00:31:07.786 cpu : usr=98.98%, sys=0.71%, ctx=38, majf=0, minf=9 00:31:07.786 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:07.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.786 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.786 filename2: (groupid=0, jobs=1): err= 0: pid=1300804: Mon Jul 15 14:01:32 2024 00:31:07.786 read: IOPS=504, BW=2020KiB/s (2068kB/s)(19.8MiB/10018msec) 00:31:07.786 slat (nsec): min=5577, max=79263, avg=16603.74, stdev=12923.36 00:31:07.786 clat (usec): min=15453, max=52074, avg=31551.41, stdev=2574.19 00:31:07.786 lat (usec): min=15459, max=52082, avg=31568.01, stdev=2575.65 00:31:07.786 clat percentiles (usec): 00:31:07.786 | 1.00th=[20579], 5.00th=[25822], 10.00th=[31327], 20.00th=[31589], 00:31:07.786 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:07.786 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:31:07.786 | 99.00th=[34341], 99.50th=[35390], 99.90th=[52167], 99.95th=[52167], 00:31:07.786 | 99.99th=[52167] 00:31:07.786 bw ( KiB/s): min= 1920, max= 2320, per=4.22%, avg=2016.80, stdev=118.66, samples=20 00:31:07.786 iops : min= 480, max= 580, avg=504.20, stdev=29.66, samples=20 00:31:07.786 lat (msec) : 20=0.59%, 50=99.29%, 100=0.12% 00:31:07.786 cpu : usr=98.88%, sys=0.77%, ctx=29, majf=0, minf=9 00:31:07.786 IO depths : 1=5.8%, 2=11.7%, 4=23.9%, 8=51.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:07.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 
complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 issued rwts: total=5058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.786 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.786 filename2: (groupid=0, jobs=1): err= 0: pid=1300805: Mon Jul 15 14:01:32 2024 00:31:07.786 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10005msec) 00:31:07.786 slat (nsec): min=5584, max=78362, avg=21287.08, stdev=14002.44 00:31:07.786 clat (usec): min=27736, max=46115, avg=32084.92, stdev=970.08 00:31:07.786 lat (usec): min=27750, max=46137, avg=32106.21, stdev=969.08 00:31:07.786 clat percentiles (usec): 00:31:07.786 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:07.786 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:07.786 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:31:07.786 | 99.00th=[34341], 99.50th=[34866], 99.90th=[45876], 99.95th=[45876], 00:31:07.786 | 99.99th=[45876] 00:31:07.786 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1980.79, stdev=65.51, samples=19 00:31:07.786 iops : min= 480, max= 512, avg=495.16, stdev=16.42, samples=19 00:31:07.786 lat (msec) : 50=100.00% 00:31:07.786 cpu : usr=99.05%, sys=0.65%, ctx=12, majf=0, minf=9 00:31:07.786 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:07.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.786 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.786 filename2: (groupid=0, jobs=1): err= 0: pid=1300806: Mon Jul 15 14:01:32 2024 00:31:07.786 read: IOPS=495, BW=1983KiB/s (2030kB/s)(19.4MiB/10006msec) 00:31:07.786 slat (nsec): min=5668, max=85793, avg=22970.58, stdev=15705.96 00:31:07.786 clat (usec): min=22973, max=53198, avg=32055.19, stdev=1470.94 00:31:07.786 lat (usec): min=22979, max=53220, avg=32078.16, stdev=1470.30 00:31:07.786 clat percentiles (usec): 00:31:07.786 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:07.786 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:07.786 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:31:07.786 | 99.00th=[34341], 99.50th=[36439], 99.90th=[53216], 99.95th=[53216], 00:31:07.786 | 99.99th=[53216] 00:31:07.786 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1980.63, stdev=75.53, samples=19 00:31:07.786 iops : min= 448, max= 512, avg=495.16, stdev=18.88, samples=19 00:31:07.786 lat (msec) : 50=99.68%, 100=0.32% 00:31:07.786 cpu : usr=96.65%, sys=1.75%, ctx=216, majf=0, minf=9 00:31:07.786 IO depths : 1=6.0%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:07.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.786 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.787 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.787 filename2: (groupid=0, jobs=1): err= 0: pid=1300808: Mon Jul 15 14:01:32 2024 00:31:07.787 read: IOPS=490, BW=1961KiB/s (2009kB/s)(19.2MiB/10023msec) 00:31:07.787 slat (nsec): min=5568, max=84336, avg=17531.81, stdev=13172.98 00:31:07.787 clat (usec): min=15227, max=57916, avg=32486.77, stdev=4634.81 00:31:07.787 lat (usec): min=15238, max=57936, avg=32504.30, stdev=4634.83 
00:31:07.787 clat percentiles (usec): 00:31:07.787 | 1.00th=[19268], 5.00th=[25297], 10.00th=[29230], 20.00th=[31589], 00:31:07.787 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:07.787 | 70.00th=[32375], 80.00th=[32637], 90.00th=[36439], 95.00th=[41681], 00:31:07.787 | 99.00th=[50594], 99.50th=[52691], 99.90th=[57934], 99.95th=[57934], 00:31:07.787 | 99.99th=[57934] 00:31:07.787 bw ( KiB/s): min= 1840, max= 2048, per=4.11%, avg=1962.00, stdev=60.70, samples=20 00:31:07.787 iops : min= 460, max= 512, avg=490.50, stdev=15.17, samples=20 00:31:07.787 lat (msec) : 20=1.14%, 50=97.78%, 100=1.08% 00:31:07.787 cpu : usr=98.81%, sys=0.84%, ctx=50, majf=0, minf=9 00:31:07.787 IO depths : 1=1.6%, 2=4.3%, 4=14.5%, 8=66.9%, 16=12.7%, 32=0.0%, >=64=0.0% 00:31:07.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.787 complete : 0=0.0%, 4=92.3%, 8=3.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.787 issued rwts: total=4915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.787 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:07.787 00:31:07.787 Run status group 0 (all jobs): 00:31:07.787 READ: bw=46.7MiB/s (48.9MB/s), 1961KiB/s-2032KiB/s (2009kB/s-2081kB/s), io=468MiB (491MB), run=10005-10026msec 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.787 14:01:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.787 14:01:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.787 bdev_null0 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.787 [2024-07-15 14:01:33.061392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.787 bdev_null1 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:07.787 14:01:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:07.787 { 00:31:07.787 "params": { 00:31:07.787 "name": "Nvme$subsystem", 00:31:07.787 "trtype": "$TEST_TRANSPORT", 00:31:07.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:07.787 "adrfam": "ipv4", 00:31:07.787 "trsvcid": "$NVMF_PORT", 00:31:07.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:07.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:07.787 "hdgst": ${hdgst:-false}, 00:31:07.787 "ddgst": ${ddgst:-false} 00:31:07.787 }, 00:31:07.787 "method": "bdev_nvme_attach_controller" 00:31:07.787 } 00:31:07.787 EOF 00:31:07.787 )") 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:07.787 14:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:07.788 { 00:31:07.788 "params": { 00:31:07.788 "name": "Nvme$subsystem", 00:31:07.788 "trtype": "$TEST_TRANSPORT", 00:31:07.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:07.788 "adrfam": "ipv4", 00:31:07.788 "trsvcid": "$NVMF_PORT", 00:31:07.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:07.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:07.788 "hdgst": ${hdgst:-false}, 00:31:07.788 "ddgst": ${ddgst:-false} 00:31:07.788 }, 00:31:07.788 "method": "bdev_nvme_attach_controller" 00:31:07.788 } 00:31:07.788 EOF 
00:31:07.788 )") 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:07.788 "params": { 00:31:07.788 "name": "Nvme0", 00:31:07.788 "trtype": "tcp", 00:31:07.788 "traddr": "10.0.0.2", 00:31:07.788 "adrfam": "ipv4", 00:31:07.788 "trsvcid": "4420", 00:31:07.788 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:07.788 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:07.788 "hdgst": false, 00:31:07.788 "ddgst": false 00:31:07.788 }, 00:31:07.788 "method": "bdev_nvme_attach_controller" 00:31:07.788 },{ 00:31:07.788 "params": { 00:31:07.788 "name": "Nvme1", 00:31:07.788 "trtype": "tcp", 00:31:07.788 "traddr": "10.0.0.2", 00:31:07.788 "adrfam": "ipv4", 00:31:07.788 "trsvcid": "4420", 00:31:07.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:07.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:07.788 "hdgst": false, 00:31:07.788 "ddgst": false 00:31:07.788 }, 00:31:07.788 "method": "bdev_nvme_attach_controller" 00:31:07.788 }' 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:07.788 14:01:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.788 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:07.788 ... 00:31:07.788 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:07.788 ... 
00:31:07.788 fio-3.35 00:31:07.788 Starting 4 threads 00:31:07.788 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.076 00:31:13.076 filename0: (groupid=0, jobs=1): err= 0: pid=1303196: Mon Jul 15 14:01:39 2024 00:31:13.076 read: IOPS=2067, BW=16.2MiB/s (16.9MB/s)(80.8MiB/5004msec) 00:31:13.076 slat (nsec): min=5406, max=57720, avg=6230.64, stdev=2087.08 00:31:13.076 clat (usec): min=1959, max=6354, avg=3851.89, stdev=623.19 00:31:13.076 lat (usec): min=1983, max=6362, avg=3858.12, stdev=623.13 00:31:13.076 clat percentiles (usec): 00:31:13.076 | 1.00th=[ 2606], 5.00th=[ 2966], 10.00th=[ 3163], 20.00th=[ 3392], 00:31:13.076 | 30.00th=[ 3490], 40.00th=[ 3687], 50.00th=[ 3785], 60.00th=[ 3818], 00:31:13.076 | 70.00th=[ 4080], 80.00th=[ 4359], 90.00th=[ 4686], 95.00th=[ 5014], 00:31:13.076 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 6128], 99.95th=[ 6325], 00:31:13.076 | 99.99th=[ 6325] 00:31:13.076 bw ( KiB/s): min=16288, max=16704, per=25.06%, avg=16539.20, stdev=126.22, samples=10 00:31:13.076 iops : min= 2036, max= 2088, avg=2067.40, stdev=15.78, samples=10 00:31:13.076 lat (msec) : 2=0.02%, 4=66.24%, 10=33.74% 00:31:13.076 cpu : usr=95.70%, sys=3.46%, ctx=279, majf=0, minf=0 00:31:13.076 IO depths : 1=0.3%, 2=1.2%, 4=70.7%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:13.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.077 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.077 issued rwts: total=10345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.077 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:13.077 filename0: (groupid=0, jobs=1): err= 0: pid=1303197: Mon Jul 15 14:01:39 2024 00:31:13.077 read: IOPS=2220, BW=17.3MiB/s (18.2MB/s)(86.8MiB/5002msec) 00:31:13.077 slat (nsec): min=5396, max=33158, avg=6034.82, stdev=1651.77 00:31:13.077 clat (usec): min=1402, max=6303, avg=3586.59, stdev=606.47 00:31:13.077 lat (usec): min=1407, max=6308, avg=3592.62, stdev=606.46 00:31:13.077 clat percentiles (usec): 00:31:13.077 | 1.00th=[ 2311], 5.00th=[ 2671], 10.00th=[ 2868], 20.00th=[ 3130], 00:31:13.077 | 30.00th=[ 3294], 40.00th=[ 3425], 50.00th=[ 3556], 60.00th=[ 3687], 00:31:13.077 | 70.00th=[ 3785], 80.00th=[ 3949], 90.00th=[ 4424], 95.00th=[ 4752], 00:31:13.077 | 99.00th=[ 5407], 99.50th=[ 5538], 99.90th=[ 5800], 99.95th=[ 6063], 00:31:13.077 | 99.99th=[ 6259] 00:31:13.077 bw ( KiB/s): min=17520, max=18064, per=26.90%, avg=17755.20, stdev=156.59, samples=10 00:31:13.077 iops : min= 2190, max= 2258, avg=2219.40, stdev=19.57, samples=10 00:31:13.077 lat (msec) : 2=0.31%, 4=81.58%, 10=18.11% 00:31:13.077 cpu : usr=97.00%, sys=2.74%, ctx=12, majf=0, minf=9 00:31:13.077 IO depths : 1=0.4%, 2=2.9%, 4=68.1%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:13.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.077 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.077 issued rwts: total=11105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.077 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:13.077 filename1: (groupid=0, jobs=1): err= 0: pid=1303198: Mon Jul 15 14:01:39 2024 00:31:13.077 read: IOPS=2037, BW=15.9MiB/s (16.7MB/s)(79.6MiB/5002msec) 00:31:13.077 slat (nsec): min=5399, max=65235, avg=6087.16, stdev=2000.22 00:31:13.077 clat (usec): min=2074, max=6565, avg=3909.26, stdev=639.05 00:31:13.077 lat (usec): min=2079, max=6570, avg=3915.35, stdev=639.05 00:31:13.077 clat percentiles (usec): 00:31:13.077 | 1.00th=[ 2671], 5.00th=[ 
3032], 10.00th=[ 3195], 20.00th=[ 3425], 00:31:13.077 | 30.00th=[ 3523], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3916], 00:31:13.077 | 70.00th=[ 4178], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5080], 00:31:13.077 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 6259], 99.95th=[ 6259], 00:31:13.077 | 99.99th=[ 6521] 00:31:13.077 bw ( KiB/s): min=16192, max=16560, per=24.68%, avg=16291.56, stdev=110.79, samples=9 00:31:13.077 iops : min= 2024, max= 2070, avg=2036.44, stdev=13.85, samples=9 00:31:13.077 lat (msec) : 4=62.86%, 10=37.14% 00:31:13.077 cpu : usr=96.90%, sys=2.86%, ctx=6, majf=0, minf=9 00:31:13.077 IO depths : 1=0.3%, 2=1.3%, 4=70.5%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:13.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.077 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.077 issued rwts: total=10191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.077 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:13.077 filename1: (groupid=0, jobs=1): err= 0: pid=1303199: Mon Jul 15 14:01:39 2024 00:31:13.077 read: IOPS=1975, BW=15.4MiB/s (16.2MB/s)(77.8MiB/5042msec) 00:31:13.077 slat (nsec): min=7886, max=66210, avg=8575.31, stdev=1599.02 00:31:13.077 clat (usec): min=1859, max=42451, avg=4005.35, stdev=956.04 00:31:13.077 lat (usec): min=1868, max=42460, avg=4013.92, stdev=956.05 00:31:13.077 clat percentiles (usec): 00:31:13.077 | 1.00th=[ 2704], 5.00th=[ 3032], 10.00th=[ 3228], 20.00th=[ 3458], 00:31:13.077 | 30.00th=[ 3621], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 4047], 00:31:13.077 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5276], 00:31:13.077 | 99.00th=[ 5866], 99.50th=[ 6128], 99.90th=[ 7504], 99.95th=[ 7570], 00:31:13.077 | 99.99th=[42206] 00:31:13.077 bw ( KiB/s): min=15647, max=16256, per=24.13%, avg=15927.90, stdev=192.05, samples=10 00:31:13.077 iops : min= 1955, max= 2032, avg=1990.90, stdev=24.15, samples=10 00:31:13.077 lat (msec) : 2=0.09%, 4=58.34%, 10=41.54%, 50=0.03% 00:31:13.077 cpu : usr=96.81%, sys=2.88%, ctx=42, majf=0, minf=9 00:31:13.077 IO depths : 1=0.5%, 2=1.8%, 4=69.9%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:13.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.077 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.077 issued rwts: total=9961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.077 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:13.077 00:31:13.077 Run status group 0 (all jobs): 00:31:13.077 READ: bw=64.5MiB/s (67.6MB/s), 15.4MiB/s-17.3MiB/s (16.2MB/s-18.2MB/s), io=325MiB (341MB), run=5002-5042msec 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.077 00:31:13.077 real 0m24.750s 00:31:13.077 user 5m13.025s 00:31:13.077 sys 0m4.621s 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:13.077 14:01:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.077 ************************************ 00:31:13.077 END TEST fio_dif_rand_params 00:31:13.077 ************************************ 00:31:13.077 14:01:39 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:13.077 14:01:39 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:13.077 14:01:39 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:13.077 14:01:39 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:13.077 14:01:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:13.077 ************************************ 00:31:13.077 START TEST fio_dif_digest 00:31:13.077 ************************************ 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:13.077 bdev_null0 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.077 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:13.338 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.338 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:13.338 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.338 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:13.338 [2024-07-15 14:01:39.609845] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.338 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.338 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:13.338 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:13.338 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:13.339 { 00:31:13.339 "params": { 00:31:13.339 "name": "Nvme$subsystem", 00:31:13.339 "trtype": "$TEST_TRANSPORT", 00:31:13.339 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:31:13.339 "adrfam": "ipv4", 00:31:13.339 "trsvcid": "$NVMF_PORT", 00:31:13.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:13.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:13.339 "hdgst": ${hdgst:-false}, 00:31:13.339 "ddgst": ${ddgst:-false} 00:31:13.339 }, 00:31:13.339 "method": "bdev_nvme_attach_controller" 00:31:13.339 } 00:31:13.339 EOF 00:31:13.339 )") 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:13.339 "params": { 00:31:13.339 "name": "Nvme0", 00:31:13.339 "trtype": "tcp", 00:31:13.339 "traddr": "10.0.0.2", 00:31:13.339 "adrfam": "ipv4", 00:31:13.339 "trsvcid": "4420", 00:31:13.339 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:13.339 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:13.339 "hdgst": true, 00:31:13.339 "ddgst": true 00:31:13.339 }, 00:31:13.339 "method": "bdev_nvme_attach_controller" 00:31:13.339 }' 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:13.339 14:01:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.600 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:13.600 ... 
00:31:13.600 fio-3.35 00:31:13.600 Starting 3 threads 00:31:13.600 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.853 00:31:25.853 filename0: (groupid=0, jobs=1): err= 0: pid=1304586: Mon Jul 15 14:01:50 2024 00:31:25.853 read: IOPS=152, BW=19.0MiB/s (19.9MB/s)(190MiB/10007msec) 00:31:25.853 slat (nsec): min=5762, max=58147, avg=7541.81, stdev=2307.67 00:31:25.853 clat (usec): min=7703, max=97571, avg=19700.27, stdev=15053.76 00:31:25.853 lat (usec): min=7709, max=97578, avg=19707.81, stdev=15053.68 00:31:25.853 clat percentiles (usec): 00:31:25.853 | 1.00th=[ 8979], 5.00th=[10552], 10.00th=[11338], 20.00th=[12649], 00:31:25.853 | 30.00th=[13435], 40.00th=[14222], 50.00th=[14877], 60.00th=[15401], 00:31:25.853 | 70.00th=[16057], 80.00th=[16909], 90.00th=[53216], 95.00th=[55837], 00:31:25.853 | 99.00th=[58983], 99.50th=[95945], 99.90th=[96994], 99.95th=[98042], 00:31:25.853 | 99.99th=[98042] 00:31:25.853 bw ( KiB/s): min=12288, max=25600, per=26.62%, avg=19468.80, stdev=3713.02, samples=20 00:31:25.853 iops : min= 96, max= 200, avg=152.10, stdev=29.01, samples=20 00:31:25.853 lat (msec) : 10=3.02%, 20=84.18%, 100=12.80% 00:31:25.853 cpu : usr=96.01%, sys=3.73%, ctx=30, majf=0, minf=181 00:31:25.853 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.853 issued rwts: total=1523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.853 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:25.853 filename0: (groupid=0, jobs=1): err= 0: pid=1304587: Mon Jul 15 14:01:50 2024 00:31:25.853 read: IOPS=247, BW=30.9MiB/s (32.4MB/s)(309MiB/10005msec) 00:31:25.853 slat (nsec): min=5767, max=37904, avg=7194.83, stdev=1536.96 00:31:25.853 clat (usec): min=5353, max=90164, avg=12117.57, stdev=9244.75 00:31:25.853 lat (usec): min=5359, max=90170, avg=12124.77, stdev=9244.72 00:31:25.853 clat percentiles (usec): 00:31:25.853 | 1.00th=[ 5997], 5.00th=[ 6652], 10.00th=[ 7373], 20.00th=[ 8291], 00:31:25.853 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10552], 60.00th=[11076], 00:31:25.853 | 70.00th=[11600], 80.00th=[12125], 90.00th=[13042], 95.00th=[14615], 00:31:25.853 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53740], 99.95th=[88605], 00:31:25.853 | 99.99th=[89654] 00:31:25.853 bw ( KiB/s): min=21504, max=41216, per=43.67%, avg=31932.63, stdev=4934.48, samples=19 00:31:25.853 iops : min= 168, max= 322, avg=249.47, stdev=38.55, samples=19 00:31:25.853 lat (msec) : 10=44.00%, 20=51.11%, 50=1.37%, 100=3.52% 00:31:25.853 cpu : usr=95.98%, sys=3.76%, ctx=19, majf=0, minf=103 00:31:25.853 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.853 issued rwts: total=2475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.853 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:25.853 filename0: (groupid=0, jobs=1): err= 0: pid=1304588: Mon Jul 15 14:01:50 2024 00:31:25.853 read: IOPS=173, BW=21.7MiB/s (22.7MB/s)(218MiB/10049msec) 00:31:25.853 slat (nsec): min=5629, max=32092, avg=6799.09, stdev=1325.81 00:31:25.853 clat (msec): min=6, max=133, avg=17.26, stdev=13.32 00:31:25.853 lat (msec): min=6, max=133, avg=17.27, stdev=13.32 00:31:25.853 clat percentiles (msec): 00:31:25.853 | 1.00th=[ 8], 5.00th=[ 
9], 10.00th=[ 11], 20.00th=[ 12], 00:31:25.853 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:31:25.853 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 18], 95.00th=[ 55], 00:31:25.853 | 99.00th=[ 59], 99.50th=[ 95], 99.90th=[ 100], 99.95th=[ 134], 00:31:25.853 | 99.99th=[ 134] 00:31:25.853 bw ( KiB/s): min=15360, max=30464, per=30.47%, avg=22284.80, stdev=4517.65, samples=20 00:31:25.853 iops : min= 120, max= 238, avg=174.10, stdev=35.29, samples=20 00:31:25.853 lat (msec) : 10=9.87%, 20=82.33%, 100=7.75%, 250=0.06% 00:31:25.853 cpu : usr=95.93%, sys=3.81%, ctx=32, majf=0, minf=238 00:31:25.853 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.853 issued rwts: total=1743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.853 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:25.853 00:31:25.853 Run status group 0 (all jobs): 00:31:25.853 READ: bw=71.4MiB/s (74.9MB/s), 19.0MiB/s-30.9MiB/s (19.9MB/s-32.4MB/s), io=718MiB (752MB), run=10005-10049msec 00:31:25.853 14:01:50 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:25.853 14:01:50 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:25.853 14:01:50 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:25.853 14:01:50 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:25.853 14:01:50 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:25.853 14:01:50 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:25.853 14:01:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.853 14:01:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:25.853 14:01:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.853 14:01:50 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:25.853 14:01:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.853 14:01:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:25.853 14:01:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.853 00:31:25.853 real 0m11.215s 00:31:25.853 user 0m42.092s 00:31:25.853 sys 0m1.426s 00:31:25.853 14:01:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:25.853 14:01:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:25.853 ************************************ 00:31:25.853 END TEST fio_dif_digest 00:31:25.853 ************************************ 00:31:25.853 14:01:50 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:25.853 14:01:50 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:25.853 14:01:50 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:25.853 14:01:50 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:25.853 14:01:50 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:25.853 14:01:50 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:25.853 14:01:50 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:25.853 14:01:50 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:25.853 14:01:50 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:25.853 rmmod nvme_tcp 
00:31:25.853 rmmod nvme_fabrics 00:31:25.853 rmmod nvme_keyring 00:31:25.853 14:01:50 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:25.853 14:01:50 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:25.853 14:01:50 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:25.853 14:01:50 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1294188 ']' 00:31:25.853 14:01:50 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1294188 00:31:25.853 14:01:50 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1294188 ']' 00:31:25.853 14:01:50 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1294188 00:31:25.853 14:01:50 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:31:25.853 14:01:50 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:25.853 14:01:50 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1294188 00:31:25.853 14:01:50 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:25.853 14:01:50 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:25.853 14:01:50 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1294188' 00:31:25.853 killing process with pid 1294188 00:31:25.853 14:01:50 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1294188 00:31:25.853 14:01:50 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1294188 00:31:25.853 14:01:51 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:25.853 14:01:51 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:27.769 Waiting for block devices as requested 00:31:27.769 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:27.769 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:27.769 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:28.030 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:28.030 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:28.030 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:28.030 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:28.290 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:28.290 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:28.550 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:28.550 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:28.550 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:28.811 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:28.811 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:28.811 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:28.811 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:29.071 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:29.331 14:01:55 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:29.331 14:01:55 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:29.331 14:01:55 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:29.331 14:01:55 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:29.331 14:01:55 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.331 14:01:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:29.331 14:01:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.237 14:01:57 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:31.237 00:31:31.237 real 1m17.307s 00:31:31.237 user 7m55.685s 00:31:31.237 sys 0m19.683s 00:31:31.237 14:01:57 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:31.237 14:01:57 nvmf_dif 
-- common/autotest_common.sh@10 -- # set +x 00:31:31.237 ************************************ 00:31:31.237 END TEST nvmf_dif 00:31:31.237 ************************************ 00:31:31.237 14:01:57 -- common/autotest_common.sh@1142 -- # return 0 00:31:31.237 14:01:57 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:31.237 14:01:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:31.237 14:01:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:31.237 14:01:57 -- common/autotest_common.sh@10 -- # set +x 00:31:31.499 ************************************ 00:31:31.499 START TEST nvmf_abort_qd_sizes 00:31:31.499 ************************************ 00:31:31.499 14:01:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:31.499 * Looking for test storage... 00:31:31.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:31.499 14:01:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:31.499 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:31.499 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:31.499 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:31.499 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:31.499 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:31.499 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:31.499 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:31.499 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:31.499 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:31.499 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:31.499 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:31.499 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.500 14:01:57 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:31.500 14:01:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:39.644 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:39.644 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:39.644 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:39.645 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:39.645 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:39.645 14:02:04 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.645 14:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:39.645 14:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:39.645 14:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:39.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:39.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:31:39.645 00:31:39.645 --- 10.0.0.2 ping statistics --- 00:31:39.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.645 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:31:39.645 14:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:39.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:39.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:31:39.645 00:31:39.645 --- 10.0.0.1 ping statistics --- 00:31:39.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.645 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:31:39.645 14:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:39.645 14:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:39.645 14:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:39.645 14:02:05 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:42.193 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:42.193 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:42.193 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:42.193 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:42.193 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:42.193 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:42.193 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:42.193 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:42.193 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:42.193 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:42.193 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:42.193 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:42.193 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:42.193 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:42.193 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:42.193 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:42.193 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:42.454 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:42.454 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:42.454 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:42.454 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:42.454 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:42.454 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:42.454 14:02:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:42.454 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:42.454 14:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:42.454 14:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:42.715 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1313808 00:31:42.715 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1313808 00:31:42.715 14:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:42.715 14:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1313808 ']' 00:31:42.715 14:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.715 14:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:42.715 14:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:42.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:42.715 14:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:42.715 14:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:42.715 [2024-07-15 14:02:09.038170] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:31:42.715 [2024-07-15 14:02:09.038223] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:42.715 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.715 [2024-07-15 14:02:09.106663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:42.715 [2024-07-15 14:02:09.176967] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:42.715 [2024-07-15 14:02:09.177002] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:42.715 [2024-07-15 14:02:09.177010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:42.715 [2024-07-15 14:02:09.177016] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:42.715 [2024-07-15 14:02:09.177022] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:42.715 [2024-07-15 14:02:09.177173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:42.715 [2024-07-15 14:02:09.177230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:42.715 [2024-07-15 14:02:09.177393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.715 [2024-07-15 14:02:09.177394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:31:43.656 14:02:09 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:43.656 14:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:43.656 ************************************ 00:31:43.656 START TEST spdk_target_abort 00:31:43.656 ************************************ 00:31:43.656 14:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:31:43.656 14:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:43.656 14:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:43.656 14:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.656 14:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:43.916 spdk_targetn1 00:31:43.916 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.916 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:43.916 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.916 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:43.916 [2024-07-15 14:02:10.213257] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:43.916 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.916 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:43.916 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.916 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:43.916 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.916 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:43.916 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.916 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:43.916 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.916 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:43.917 [2024-07-15 14:02:10.253533] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:43.917 14:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:43.917 EAL: No free 2048 kB hugepages 
reported on node 1 00:31:43.917 [2024-07-15 14:02:10.435619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:568 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:31:43.917 [2024-07-15 14:02:10.435646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:004a p:1 m:0 dnr:0 00:31:44.177 [2024-07-15 14:02:10.530723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2344 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:31:44.177 [2024-07-15 14:02:10.530744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:44.177 [2024-07-15 14:02:10.537624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2544 len:8 PRP1 0x2000078be000 PRP2 0x0 00:31:44.177 [2024-07-15 14:02:10.537640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:47.524 Initializing NVMe Controllers 00:31:47.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:47.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:47.524 Initialization complete. Launching workers. 00:31:47.524 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11610, failed: 3 00:31:47.524 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2583, failed to submit 9030 00:31:47.524 success 779, unsuccess 1804, failed 0 00:31:47.524 14:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:47.524 14:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:47.524 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.524 [2024-07-15 14:02:13.593264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:624 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:31:47.524 [2024-07-15 14:02:13.593304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:31:47.524 [2024-07-15 14:02:13.601404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:888 len:8 PRP1 0x200007c4e000 PRP2 0x0 00:31:47.524 [2024-07-15 14:02:13.601427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:31:47.524 [2024-07-15 14:02:13.673254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:2392 len:8 PRP1 0x200007c52000 PRP2 0x0 00:31:47.524 [2024-07-15 14:02:13.673278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:47.524 [2024-07-15 14:02:13.697237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:3032 len:8 PRP1 0x200007c42000 PRP2 0x0 00:31:47.524 [2024-07-15 14:02:13.697259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:50.821 Initializing NVMe Controllers 00:31:50.821 Attached to NVMe over Fabrics controller 
at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:50.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:50.821 Initialization complete. Launching workers. 00:31:50.821 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8521, failed: 4 00:31:50.821 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1266, failed to submit 7259 00:31:50.821 success 316, unsuccess 950, failed 0 00:31:50.821 14:02:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:50.821 14:02:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:50.821 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.119 Initializing NVMe Controllers 00:31:54.119 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:54.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:54.119 Initialization complete. Launching workers. 00:31:54.119 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42004, failed: 0 00:31:54.119 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2692, failed to submit 39312 00:31:54.119 success 629, unsuccess 2063, failed 0 00:31:54.119 14:02:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:54.119 14:02:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.119 14:02:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:54.119 14:02:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.119 14:02:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:54.119 14:02:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.119 14:02:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.504 14:02:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.504 14:02:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1313808 00:31:55.504 14:02:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1313808 ']' 00:31:55.504 14:02:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1313808 00:31:55.504 14:02:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:31:55.504 14:02:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:55.504 14:02:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1313808 00:31:55.504 14:02:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:55.504 14:02:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:55.504 14:02:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 1313808' 00:31:55.504 killing process with pid 1313808 00:31:55.504 14:02:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1313808 00:31:55.504 14:02:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1313808 00:31:55.766 00:31:55.766 real 0m12.171s 00:31:55.766 user 0m49.317s 00:31:55.766 sys 0m1.974s 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.766 ************************************ 00:31:55.766 END TEST spdk_target_abort 00:31:55.766 ************************************ 00:31:55.766 14:02:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:31:55.766 14:02:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:55.766 14:02:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:55.766 14:02:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:55.766 14:02:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:55.766 ************************************ 00:31:55.766 START TEST kernel_target_abort 00:31:55.766 ************************************ 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:55.766 14:02:22 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:55.766 14:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:59.067 Waiting for block devices as requested 00:31:59.067 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:59.067 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:59.067 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:59.067 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:59.328 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:59.328 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:59.328 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:59.589 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:59.589 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:59.850 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:59.850 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:59.850 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:00.110 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:00.110 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:00.110 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:00.110 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:00.371 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:00.632 14:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:00.632 14:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:00.632 14:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:00.632 14:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:00.632 14:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:00.632 14:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:00.632 14:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:00.632 14:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:00.632 14:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:00.632 No valid GPT data, bailing 00:32:00.632 14:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:00.632 14:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:00.632 14:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:00.632 14:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:00.632 14:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:00.632 14:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:00.632 14:02:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:32:00.632 00:32:00.632 Discovery Log Number of Records 2, Generation counter 2 00:32:00.632 =====Discovery Log Entry 0====== 00:32:00.632 trtype: tcp 00:32:00.632 adrfam: ipv4 00:32:00.632 subtype: current discovery subsystem 00:32:00.632 treq: not specified, sq flow control disable supported 00:32:00.632 portid: 1 00:32:00.632 trsvcid: 4420 00:32:00.632 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:00.632 traddr: 10.0.0.1 00:32:00.632 eflags: none 00:32:00.632 sectype: none 00:32:00.632 =====Discovery Log Entry 1====== 00:32:00.632 trtype: tcp 00:32:00.632 adrfam: ipv4 00:32:00.632 subtype: nvme subsystem 00:32:00.632 treq: not specified, sq flow control disable supported 00:32:00.632 portid: 1 00:32:00.632 trsvcid: 4420 00:32:00.632 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:00.632 traddr: 10.0.0.1 00:32:00.632 eflags: none 00:32:00.632 sectype: none 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:00.632 
14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:00.632 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:00.633 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:00.633 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:00.633 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:00.633 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:00.633 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:00.633 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:00.633 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:00.633 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:00.633 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:00.633 14:02:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:00.633 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.933 Initializing NVMe Controllers 00:32:03.933 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:03.933 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:03.933 Initialization complete. Launching workers. 00:32:03.933 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48681, failed: 0 00:32:03.933 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48681, failed to submit 0 00:32:03.933 success 0, unsuccess 48681, failed 0 00:32:03.933 14:02:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:03.933 14:02:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:03.933 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.230 Initializing NVMe Controllers 00:32:07.230 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:07.230 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:07.230 Initialization complete. Launching workers. 
00:32:07.230 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89565, failed: 0 00:32:07.230 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22546, failed to submit 67019 00:32:07.230 success 0, unsuccess 22546, failed 0 00:32:07.230 14:02:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:07.230 14:02:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:07.230 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.578 Initializing NVMe Controllers 00:32:10.578 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:10.578 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:10.578 Initialization complete. Launching workers. 00:32:10.578 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 86330, failed: 0 00:32:10.578 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21550, failed to submit 64780 00:32:10.578 success 0, unsuccess 21550, failed 0 00:32:10.578 14:02:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:10.578 14:02:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:10.578 14:02:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:10.578 14:02:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:10.578 14:02:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:10.578 14:02:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:10.578 14:02:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:10.578 14:02:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:10.578 14:02:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:10.578 14:02:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:13.882 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:13.882 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:13.882 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:13.882 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:13.882 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:13.882 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:13.882 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:13.882 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:13.882 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:13.882 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:13.882 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:13.882 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:13.882 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:13.882 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:32:13.882 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:13.882 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:15.267 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:15.839 00:32:15.839 real 0m19.955s 00:32:15.839 user 0m8.167s 00:32:15.839 sys 0m6.293s 00:32:15.839 14:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:15.839 14:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:15.839 ************************************ 00:32:15.839 END TEST kernel_target_abort 00:32:15.839 ************************************ 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:15.839 rmmod nvme_tcp 00:32:15.839 rmmod nvme_fabrics 00:32:15.839 rmmod nvme_keyring 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1313808 ']' 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1313808 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1313808 ']' 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1313808 00:32:15.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1313808) - No such process 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1313808 is not found' 00:32:15.839 Process with pid 1313808 is not found 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:15.839 14:02:42 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:18.385 Waiting for block devices as requested 00:32:18.385 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:18.646 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:18.646 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:18.646 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:18.906 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:18.906 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:18.906 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:19.167 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:19.167 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:19.428 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:19.428 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:19.428 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:19.428 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:19.689 0000:00:01.2 (8086 0b00): vfio-pci -> 
ioatdma 00:32:19.689 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:19.689 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:19.689 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:19.950 14:02:46 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:19.950 14:02:46 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:19.950 14:02:46 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:19.950 14:02:46 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:19.950 14:02:46 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.950 14:02:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:19.950 14:02:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.494 14:02:48 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:22.494 00:32:22.494 real 0m50.738s 00:32:22.494 user 1m2.204s 00:32:22.494 sys 0m18.612s 00:32:22.494 14:02:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:22.494 14:02:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:22.494 ************************************ 00:32:22.494 END TEST nvmf_abort_qd_sizes 00:32:22.494 ************************************ 00:32:22.494 14:02:48 -- common/autotest_common.sh@1142 -- # return 0 00:32:22.494 14:02:48 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:22.494 14:02:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:22.494 14:02:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:22.494 14:02:48 -- common/autotest_common.sh@10 -- # set +x 00:32:22.494 ************************************ 00:32:22.494 START TEST keyring_file 00:32:22.494 ************************************ 00:32:22.494 14:02:48 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:22.494 * Looking for test storage... 
00:32:22.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:22.494 14:02:48 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.494 14:02:48 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.494 14:02:48 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.494 14:02:48 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.494 14:02:48 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.494 14:02:48 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.494 14:02:48 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.494 14:02:48 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:22.494 14:02:48 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:22.494 14:02:48 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:22.494 14:02:48 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:22.494 14:02:48 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:22.494 14:02:48 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:22.494 14:02:48 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:22.494 14:02:48 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TbHiCqR4wC 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:22.494 14:02:48 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TbHiCqR4wC 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TbHiCqR4wC 00:32:22.494 14:02:48 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.TbHiCqR4wC 00:32:22.494 14:02:48 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eetRVS4fB9 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:22.494 14:02:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eetRVS4fB9 00:32:22.494 14:02:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eetRVS4fB9 00:32:22.494 14:02:48 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.eetRVS4fB9 00:32:22.494 14:02:48 keyring_file -- keyring/file.sh@30 -- # tgtpid=1323994 00:32:22.494 14:02:48 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1323994 00:32:22.494 14:02:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1323994 ']' 00:32:22.494 14:02:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.494 14:02:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:22.494 14:02:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.494 14:02:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:22.494 14:02:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:22.494 14:02:48 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:22.494 [2024-07-15 14:02:48.887383] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:32:22.494 [2024-07-15 14:02:48.887452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1323994 ] 00:32:22.494 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.494 [2024-07-15 14:02:48.951955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.754 [2024-07-15 14:02:49.025832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.324 14:02:49 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:23.324 14:02:49 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:23.324 14:02:49 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:23.324 14:02:49 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.324 14:02:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:23.324 [2024-07-15 14:02:49.634102] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.324 null0 00:32:23.324 [2024-07-15 14:02:49.666146] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:23.324 [2024-07-15 14:02:49.666431] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:23.324 [2024-07-15 14:02:49.674160] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:23.324 14:02:49 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.324 14:02:49 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:23.324 14:02:49 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:23.324 14:02:49 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:23.324 14:02:49 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:23.324 14:02:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:23.324 14:02:49 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:23.324 14:02:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:23.324 14:02:49 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:23.324 14:02:49 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.324 14:02:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:23.324 [2024-07-15 14:02:49.686186] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:23.324 request: 00:32:23.324 { 00:32:23.324 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:23.324 "secure_channel": false, 00:32:23.324 "listen_address": { 00:32:23.324 "trtype": "tcp", 00:32:23.324 "traddr": "127.0.0.1", 00:32:23.324 "trsvcid": "4420" 00:32:23.324 }, 00:32:23.324 "method": "nvmf_subsystem_add_listener", 00:32:23.324 "req_id": 1 00:32:23.324 } 00:32:23.324 Got JSON-RPC error response 00:32:23.324 response: 00:32:23.324 { 00:32:23.324 "code": -32602, 00:32:23.324 "message": "Invalid parameters" 00:32:23.324 } 00:32:23.324 14:02:49 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:23.324 14:02:49 keyring_file -- common/autotest_common.sh@651 -- # es=1 
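The key handling in this test comes down to four steps: write an NVMe TLS interchange-format secret to a temp file (the prep_key/format_interchange_psk calls above), restrict the file to mode 0600, register it with the bdevperf RPC socket as a named key, and attach a controller that references the key by name (the keyring_file_add_key and bdev_nvme_attach_controller RPCs that follow). A condensed sketch under those assumptions; $SPDK_DIR and the echoed key string are placeholders, not the values generated in this run:

  # Sketch of the file-based PSK flow exercised by keyring_file; not the test script verbatim.
  SPDK_DIR=/path/to/spdk                      # in this run: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  key0path=$(mktemp)                          # e.g. /tmp/tmp.TbHiCqR4wC in the log
  # keyring/common.sh derives the real contents with an inline python helper
  # (format_interchange_psk); a placeholder string stands in for it here.
  echo 'NVMeTLSkey-1:<placeholder>:' > "$key0path"
  chmod 0600 "$key0path"                      # the test restricts the key file to its owner

  # Register the key with the bdevperf RPC server, then attach using --psk <key name>.
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0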
00:32:23.325 14:02:49 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:23.325 14:02:49 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:23.325 14:02:49 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:23.325 14:02:49 keyring_file -- keyring/file.sh@46 -- # bperfpid=1324326 00:32:23.325 14:02:49 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1324326 /var/tmp/bperf.sock 00:32:23.325 14:02:49 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1324326 ']' 00:32:23.325 14:02:49 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:23.325 14:02:49 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:23.325 14:02:49 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:23.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:23.325 14:02:49 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:23.325 14:02:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:23.325 14:02:49 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:23.325 [2024-07-15 14:02:49.739032] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:32:23.325 [2024-07-15 14:02:49.739078] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1324326 ] 00:32:23.325 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.325 [2024-07-15 14:02:49.813867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.585 [2024-07-15 14:02:49.877784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.155 14:02:50 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:24.155 14:02:50 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:24.155 14:02:50 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TbHiCqR4wC 00:32:24.155 14:02:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TbHiCqR4wC 00:32:24.155 14:02:50 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eetRVS4fB9 00:32:24.155 14:02:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eetRVS4fB9 00:32:24.416 14:02:50 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:24.416 14:02:50 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:24.416 14:02:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:24.416 14:02:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.416 14:02:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:24.677 14:02:50 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.TbHiCqR4wC == \/\t\m\p\/\t\m\p\.\T\b\H\i\C\q\R\4\w\C ]] 00:32:24.677 14:02:50 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:32:24.677 14:02:50 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:24.677 14:02:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:24.677 14:02:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.677 14:02:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:24.677 14:02:51 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.eetRVS4fB9 == \/\t\m\p\/\t\m\p\.\e\e\t\R\V\S\4\f\B\9 ]] 00:32:24.677 14:02:51 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:24.677 14:02:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:24.677 14:02:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:24.677 14:02:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:24.677 14:02:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.677 14:02:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:24.937 14:02:51 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:24.937 14:02:51 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:24.937 14:02:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:24.937 14:02:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:24.937 14:02:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:24.937 14:02:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.937 14:02:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:24.937 14:02:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:24.937 14:02:51 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:24.937 14:02:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:25.197 [2024-07-15 14:02:51.566522] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:25.197 nvme0n1 00:32:25.197 14:02:51 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:25.197 14:02:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:25.197 14:02:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:25.197 14:02:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:25.197 14:02:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:25.197 14:02:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:25.458 14:02:51 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:25.458 14:02:51 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:25.458 14:02:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:25.458 14:02:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:25.458 14:02:51 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:25.458 14:02:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:25.458 14:02:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:25.458 14:02:51 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:25.458 14:02:51 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:25.718 Running I/O for 1 seconds... 00:32:26.659 00:32:26.659 Latency(us) 00:32:26.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.659 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:26.659 nvme0n1 : 1.02 7698.50 30.07 0.00 0.00 16482.85 4696.75 22828.37 00:32:26.659 =================================================================================================================== 00:32:26.660 Total : 7698.50 30.07 0.00 0.00 16482.85 4696.75 22828.37 00:32:26.660 0 00:32:26.660 14:02:53 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:26.660 14:02:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:26.919 14:02:53 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:26.919 14:02:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:26.919 14:02:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:26.919 14:02:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:26.919 14:02:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:26.919 14:02:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:26.919 14:02:53 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:26.919 14:02:53 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:26.919 14:02:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:26.919 14:02:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:26.919 14:02:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:26.919 14:02:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:26.919 14:02:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.180 14:02:53 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:27.180 14:02:53 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:27.180 14:02:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:27.180 14:02:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:27.180 14:02:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:27.180 14:02:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:27.180 14:02:53 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:27.180 14:02:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:27.180 14:02:53 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:27.180 14:02:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:27.441 [2024-07-15 14:02:53.727856] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:27.441 [2024-07-15 14:02:53.728305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1e9d0 (107): Transport endpoint is not connected 00:32:27.441 [2024-07-15 14:02:53.729301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1e9d0 (9): Bad file descriptor 00:32:27.441 [2024-07-15 14:02:53.730302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:27.441 [2024-07-15 14:02:53.730309] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:27.441 [2024-07-15 14:02:53.730315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:27.441 request: 00:32:27.441 { 00:32:27.441 "name": "nvme0", 00:32:27.441 "trtype": "tcp", 00:32:27.441 "traddr": "127.0.0.1", 00:32:27.441 "adrfam": "ipv4", 00:32:27.441 "trsvcid": "4420", 00:32:27.441 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:27.441 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:27.441 "prchk_reftag": false, 00:32:27.441 "prchk_guard": false, 00:32:27.441 "hdgst": false, 00:32:27.441 "ddgst": false, 00:32:27.441 "psk": "key1", 00:32:27.441 "method": "bdev_nvme_attach_controller", 00:32:27.441 "req_id": 1 00:32:27.441 } 00:32:27.441 Got JSON-RPC error response 00:32:27.441 response: 00:32:27.441 { 00:32:27.441 "code": -5, 00:32:27.441 "message": "Input/output error" 00:32:27.441 } 00:32:27.441 14:02:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:27.441 14:02:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:27.441 14:02:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:27.441 14:02:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:27.441 14:02:53 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:27.441 14:02:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:27.441 14:02:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:27.441 14:02:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:27.441 14:02:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:27.441 14:02:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.441 14:02:53 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:27.441 14:02:53 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:27.441 14:02:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:27.441 14:02:53 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:27.441 14:02:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:27.441 14:02:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.441 14:02:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:27.701 14:02:54 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:27.701 14:02:54 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:27.701 14:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:27.997 14:02:54 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:27.997 14:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:27.997 14:02:54 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:27.997 14:02:54 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:27.997 14:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.264 14:02:54 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:28.264 14:02:54 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.TbHiCqR4wC 00:32:28.265 14:02:54 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.TbHiCqR4wC 00:32:28.265 14:02:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:28.265 14:02:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.TbHiCqR4wC 00:32:28.265 14:02:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:28.265 14:02:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:28.265 14:02:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:28.265 14:02:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:28.265 14:02:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TbHiCqR4wC 00:32:28.265 14:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TbHiCqR4wC 00:32:28.265 [2024-07-15 14:02:54.695075] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.TbHiCqR4wC': 0100660 00:32:28.265 [2024-07-15 14:02:54.695092] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:28.265 request: 00:32:28.265 { 00:32:28.265 "name": "key0", 00:32:28.265 "path": "/tmp/tmp.TbHiCqR4wC", 00:32:28.265 "method": "keyring_file_add_key", 00:32:28.265 "req_id": 1 00:32:28.265 } 00:32:28.265 Got JSON-RPC error response 00:32:28.265 response: 00:32:28.265 { 00:32:28.265 "code": -1, 00:32:28.265 "message": "Operation not permitted" 00:32:28.265 } 00:32:28.265 14:02:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:28.265 14:02:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:28.265 14:02:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:28.265 14:02:54 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:28.265 14:02:54 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.TbHiCqR4wC 00:32:28.265 14:02:54 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TbHiCqR4wC 00:32:28.265 14:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TbHiCqR4wC 00:32:28.526 14:02:54 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.TbHiCqR4wC 00:32:28.526 14:02:54 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:28.526 14:02:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:28.526 14:02:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:28.526 14:02:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:28.526 14:02:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:28.526 14:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.526 14:02:55 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:28.526 14:02:55 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:28.526 14:02:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:28.526 14:02:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:28.526 14:02:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:28.526 14:02:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:28.526 14:02:55 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:28.526 14:02:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:28.526 14:02:55 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:28.526 14:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:28.785 [2024-07-15 14:02:55.168285] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.TbHiCqR4wC': No such file or directory 00:32:28.786 [2024-07-15 14:02:55.168299] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:28.786 [2024-07-15 14:02:55.168315] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:28.786 [2024-07-15 14:02:55.168320] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:28.786 [2024-07-15 14:02:55.168325] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:28.786 request: 00:32:28.786 { 00:32:28.786 "name": "nvme0", 00:32:28.786 "trtype": "tcp", 00:32:28.786 "traddr": "127.0.0.1", 00:32:28.786 "adrfam": "ipv4", 00:32:28.786 
"trsvcid": "4420", 00:32:28.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:28.786 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:28.786 "prchk_reftag": false, 00:32:28.786 "prchk_guard": false, 00:32:28.786 "hdgst": false, 00:32:28.786 "ddgst": false, 00:32:28.786 "psk": "key0", 00:32:28.786 "method": "bdev_nvme_attach_controller", 00:32:28.786 "req_id": 1 00:32:28.786 } 00:32:28.786 Got JSON-RPC error response 00:32:28.786 response: 00:32:28.786 { 00:32:28.786 "code": -19, 00:32:28.786 "message": "No such device" 00:32:28.786 } 00:32:28.786 14:02:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:28.786 14:02:55 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:28.786 14:02:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:28.786 14:02:55 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:28.786 14:02:55 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:28.786 14:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:29.046 14:02:55 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:29.046 14:02:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:29.046 14:02:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:29.046 14:02:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:29.046 14:02:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:29.046 14:02:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:29.046 14:02:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2jqtJU4tAn 00:32:29.046 14:02:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:29.046 14:02:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:29.046 14:02:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:29.046 14:02:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:29.046 14:02:55 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:29.046 14:02:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:29.046 14:02:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:29.046 14:02:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2jqtJU4tAn 00:32:29.046 14:02:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2jqtJU4tAn 00:32:29.046 14:02:55 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.2jqtJU4tAn 00:32:29.046 14:02:55 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2jqtJU4tAn 00:32:29.046 14:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2jqtJU4tAn 00:32:29.046 14:02:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:29.046 14:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:29.306 nvme0n1 00:32:29.306 
14:02:55 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:29.306 14:02:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:29.306 14:02:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:29.306 14:02:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:29.306 14:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.306 14:02:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:29.567 14:02:55 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:29.567 14:02:55 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:29.567 14:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:29.567 14:02:56 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:29.567 14:02:56 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:29.567 14:02:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:29.567 14:02:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:29.567 14:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.827 14:02:56 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:29.827 14:02:56 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:29.827 14:02:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:29.827 14:02:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:29.827 14:02:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:29.827 14:02:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:29.827 14:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:30.131 14:02:56 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:30.131 14:02:56 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:30.131 14:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:30.131 14:02:56 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:30.131 14:02:56 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:30.131 14:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:30.393 14:02:56 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:30.393 14:02:56 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2jqtJU4tAn 00:32:30.393 14:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2jqtJU4tAn 00:32:30.393 14:02:56 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eetRVS4fB9 00:32:30.393 14:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eetRVS4fB9 00:32:30.653 14:02:57 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:30.653 14:02:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:30.914 nvme0n1 00:32:30.914 14:02:57 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:30.914 14:02:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:31.176 14:02:57 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:31.176 "subsystems": [ 00:32:31.176 { 00:32:31.176 "subsystem": "keyring", 00:32:31.176 "config": [ 00:32:31.176 { 00:32:31.176 "method": "keyring_file_add_key", 00:32:31.176 "params": { 00:32:31.176 "name": "key0", 00:32:31.176 "path": "/tmp/tmp.2jqtJU4tAn" 00:32:31.176 } 00:32:31.176 }, 00:32:31.176 { 00:32:31.176 "method": "keyring_file_add_key", 00:32:31.176 "params": { 00:32:31.176 "name": "key1", 00:32:31.176 "path": "/tmp/tmp.eetRVS4fB9" 00:32:31.176 } 00:32:31.176 } 00:32:31.176 ] 00:32:31.176 }, 00:32:31.176 { 00:32:31.176 "subsystem": "iobuf", 00:32:31.176 "config": [ 00:32:31.176 { 00:32:31.176 "method": "iobuf_set_options", 00:32:31.176 "params": { 00:32:31.176 "small_pool_count": 8192, 00:32:31.176 "large_pool_count": 1024, 00:32:31.176 "small_bufsize": 8192, 00:32:31.176 "large_bufsize": 135168 00:32:31.176 } 00:32:31.176 } 00:32:31.176 ] 00:32:31.176 }, 00:32:31.176 { 00:32:31.176 "subsystem": "sock", 00:32:31.176 "config": [ 00:32:31.176 { 00:32:31.176 "method": "sock_set_default_impl", 00:32:31.176 "params": { 00:32:31.176 "impl_name": "posix" 00:32:31.176 } 00:32:31.176 }, 00:32:31.176 { 00:32:31.176 "method": "sock_impl_set_options", 00:32:31.176 "params": { 00:32:31.176 "impl_name": "ssl", 00:32:31.176 "recv_buf_size": 4096, 00:32:31.176 "send_buf_size": 4096, 00:32:31.176 "enable_recv_pipe": true, 00:32:31.176 "enable_quickack": false, 00:32:31.176 "enable_placement_id": 0, 00:32:31.176 "enable_zerocopy_send_server": true, 00:32:31.176 "enable_zerocopy_send_client": false, 00:32:31.176 "zerocopy_threshold": 0, 00:32:31.176 "tls_version": 0, 00:32:31.176 "enable_ktls": false 00:32:31.176 } 00:32:31.176 }, 00:32:31.176 { 00:32:31.176 "method": "sock_impl_set_options", 00:32:31.176 "params": { 00:32:31.176 "impl_name": "posix", 00:32:31.176 "recv_buf_size": 2097152, 00:32:31.176 "send_buf_size": 2097152, 00:32:31.176 "enable_recv_pipe": true, 00:32:31.176 "enable_quickack": false, 00:32:31.176 "enable_placement_id": 0, 00:32:31.176 "enable_zerocopy_send_server": true, 00:32:31.176 "enable_zerocopy_send_client": false, 00:32:31.176 "zerocopy_threshold": 0, 00:32:31.176 "tls_version": 0, 00:32:31.176 "enable_ktls": false 00:32:31.177 } 00:32:31.177 } 00:32:31.177 ] 00:32:31.177 }, 00:32:31.177 { 00:32:31.177 "subsystem": "vmd", 00:32:31.177 "config": [] 00:32:31.177 }, 00:32:31.177 { 00:32:31.177 "subsystem": "accel", 00:32:31.177 "config": [ 00:32:31.177 { 00:32:31.177 "method": "accel_set_options", 00:32:31.177 "params": { 00:32:31.177 "small_cache_size": 128, 00:32:31.177 "large_cache_size": 16, 00:32:31.177 "task_count": 2048, 00:32:31.177 "sequence_count": 2048, 00:32:31.177 "buf_count": 2048 00:32:31.177 } 00:32:31.177 } 00:32:31.177 ] 00:32:31.177 
}, 00:32:31.177 { 00:32:31.177 "subsystem": "bdev", 00:32:31.177 "config": [ 00:32:31.177 { 00:32:31.177 "method": "bdev_set_options", 00:32:31.177 "params": { 00:32:31.177 "bdev_io_pool_size": 65535, 00:32:31.177 "bdev_io_cache_size": 256, 00:32:31.177 "bdev_auto_examine": true, 00:32:31.177 "iobuf_small_cache_size": 128, 00:32:31.177 "iobuf_large_cache_size": 16 00:32:31.177 } 00:32:31.177 }, 00:32:31.177 { 00:32:31.177 "method": "bdev_raid_set_options", 00:32:31.177 "params": { 00:32:31.177 "process_window_size_kb": 1024 00:32:31.177 } 00:32:31.177 }, 00:32:31.177 { 00:32:31.177 "method": "bdev_iscsi_set_options", 00:32:31.177 "params": { 00:32:31.177 "timeout_sec": 30 00:32:31.177 } 00:32:31.177 }, 00:32:31.177 { 00:32:31.177 "method": "bdev_nvme_set_options", 00:32:31.177 "params": { 00:32:31.177 "action_on_timeout": "none", 00:32:31.177 "timeout_us": 0, 00:32:31.177 "timeout_admin_us": 0, 00:32:31.177 "keep_alive_timeout_ms": 10000, 00:32:31.177 "arbitration_burst": 0, 00:32:31.177 "low_priority_weight": 0, 00:32:31.177 "medium_priority_weight": 0, 00:32:31.177 "high_priority_weight": 0, 00:32:31.177 "nvme_adminq_poll_period_us": 10000, 00:32:31.177 "nvme_ioq_poll_period_us": 0, 00:32:31.177 "io_queue_requests": 512, 00:32:31.177 "delay_cmd_submit": true, 00:32:31.177 "transport_retry_count": 4, 00:32:31.177 "bdev_retry_count": 3, 00:32:31.177 "transport_ack_timeout": 0, 00:32:31.177 "ctrlr_loss_timeout_sec": 0, 00:32:31.177 "reconnect_delay_sec": 0, 00:32:31.177 "fast_io_fail_timeout_sec": 0, 00:32:31.177 "disable_auto_failback": false, 00:32:31.177 "generate_uuids": false, 00:32:31.177 "transport_tos": 0, 00:32:31.177 "nvme_error_stat": false, 00:32:31.177 "rdma_srq_size": 0, 00:32:31.177 "io_path_stat": false, 00:32:31.177 "allow_accel_sequence": false, 00:32:31.177 "rdma_max_cq_size": 0, 00:32:31.177 "rdma_cm_event_timeout_ms": 0, 00:32:31.177 "dhchap_digests": [ 00:32:31.177 "sha256", 00:32:31.177 "sha384", 00:32:31.177 "sha512" 00:32:31.177 ], 00:32:31.177 "dhchap_dhgroups": [ 00:32:31.177 "null", 00:32:31.177 "ffdhe2048", 00:32:31.177 "ffdhe3072", 00:32:31.177 "ffdhe4096", 00:32:31.177 "ffdhe6144", 00:32:31.177 "ffdhe8192" 00:32:31.177 ] 00:32:31.177 } 00:32:31.177 }, 00:32:31.177 { 00:32:31.177 "method": "bdev_nvme_attach_controller", 00:32:31.177 "params": { 00:32:31.177 "name": "nvme0", 00:32:31.177 "trtype": "TCP", 00:32:31.177 "adrfam": "IPv4", 00:32:31.177 "traddr": "127.0.0.1", 00:32:31.177 "trsvcid": "4420", 00:32:31.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:31.177 "prchk_reftag": false, 00:32:31.177 "prchk_guard": false, 00:32:31.177 "ctrlr_loss_timeout_sec": 0, 00:32:31.177 "reconnect_delay_sec": 0, 00:32:31.177 "fast_io_fail_timeout_sec": 0, 00:32:31.177 "psk": "key0", 00:32:31.177 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:31.177 "hdgst": false, 00:32:31.177 "ddgst": false 00:32:31.177 } 00:32:31.177 }, 00:32:31.177 { 00:32:31.177 "method": "bdev_nvme_set_hotplug", 00:32:31.177 "params": { 00:32:31.177 "period_us": 100000, 00:32:31.177 "enable": false 00:32:31.177 } 00:32:31.177 }, 00:32:31.177 { 00:32:31.177 "method": "bdev_wait_for_examine" 00:32:31.177 } 00:32:31.177 ] 00:32:31.177 }, 00:32:31.177 { 00:32:31.177 "subsystem": "nbd", 00:32:31.177 "config": [] 00:32:31.177 } 00:32:31.177 ] 00:32:31.177 }' 00:32:31.177 14:02:57 keyring_file -- keyring/file.sh@114 -- # killprocess 1324326 00:32:31.177 14:02:57 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1324326 ']' 00:32:31.177 14:02:57 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1324326 00:32:31.177 14:02:57 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:31.177 14:02:57 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:31.177 14:02:57 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1324326 00:32:31.177 14:02:57 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:31.177 14:02:57 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:31.177 14:02:57 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1324326' 00:32:31.177 killing process with pid 1324326 00:32:31.177 14:02:57 keyring_file -- common/autotest_common.sh@967 -- # kill 1324326 00:32:31.177 Received shutdown signal, test time was about 1.000000 seconds 00:32:31.177 00:32:31.177 Latency(us) 00:32:31.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:31.177 =================================================================================================================== 00:32:31.177 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:31.177 14:02:57 keyring_file -- common/autotest_common.sh@972 -- # wait 1324326 00:32:31.177 14:02:57 keyring_file -- keyring/file.sh@117 -- # bperfpid=1325818 00:32:31.177 14:02:57 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1325818 /var/tmp/bperf.sock 00:32:31.177 14:02:57 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1325818 ']' 00:32:31.177 14:02:57 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:31.177 14:02:57 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:31.177 14:02:57 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:31.177 14:02:57 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:31.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
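Once the first bdevperf (pid 1324326) is killed, the configuration captured by save_config is replayed into a second bdevperf through `-c /dev/fd/63`, i.e. the JSON arrives over an anonymous file descriptor rather than a file on disk. A rough Python equivalent of that hand-off is sketched below; the binary path and flags are copied from the command line above, while the config dict is a heavily trimmed, illustrative subset of the dump, so adjust both for a real run.

```python
# Sketch of the "-c /dev/fd/N" hand-off: start bdevperf with its JSON config
# delivered over an inherited pipe instead of a file on disk.
import json
import os
import subprocess

# Illustrative subset of the saved config shown above.
config = {
    "subsystems": [
        {"subsystem": "keyring", "config": [
            {"method": "keyring_file_add_key",
             "params": {"name": "key0", "path": "/tmp/tmp.2jqtJU4tAn"}},
        ]},
    ]
}

r, w = os.pipe()
proc = subprocess.Popen(
    ["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf",
     "-q", "128", "-o", "4k", "-w", "randrw", "-M", "50", "-t", "1", "-m", "2",
     "-r", "/var/tmp/bperf.sock", "-z", "-c", f"/dev/fd/{r}"],
    pass_fds=(r,),               # child inherits the read end under the same fd number
)
os.close(r)                      # parent no longer needs the read end
with os.fdopen(w, "w") as cfg:   # child sees EOF once the write end is closed
    json.dump(config, cfg)
# bdevperf now idles (-z), serving JSON-RPC on /var/tmp/bperf.sock, which is
# what the bdevperf.py ... perform_tests call seen earlier drives.
```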
00:32:31.177 14:02:57 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:31.177 14:02:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:31.177 14:02:57 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:31.177 "subsystems": [ 00:32:31.177 { 00:32:31.177 "subsystem": "keyring", 00:32:31.177 "config": [ 00:32:31.177 { 00:32:31.177 "method": "keyring_file_add_key", 00:32:31.177 "params": { 00:32:31.177 "name": "key0", 00:32:31.177 "path": "/tmp/tmp.2jqtJU4tAn" 00:32:31.177 } 00:32:31.177 }, 00:32:31.177 { 00:32:31.177 "method": "keyring_file_add_key", 00:32:31.177 "params": { 00:32:31.177 "name": "key1", 00:32:31.177 "path": "/tmp/tmp.eetRVS4fB9" 00:32:31.177 } 00:32:31.177 } 00:32:31.177 ] 00:32:31.177 }, 00:32:31.177 { 00:32:31.177 "subsystem": "iobuf", 00:32:31.177 "config": [ 00:32:31.177 { 00:32:31.177 "method": "iobuf_set_options", 00:32:31.177 "params": { 00:32:31.177 "small_pool_count": 8192, 00:32:31.177 "large_pool_count": 1024, 00:32:31.177 "small_bufsize": 8192, 00:32:31.177 "large_bufsize": 135168 00:32:31.177 } 00:32:31.177 } 00:32:31.177 ] 00:32:31.177 }, 00:32:31.177 { 00:32:31.177 "subsystem": "sock", 00:32:31.177 "config": [ 00:32:31.177 { 00:32:31.177 "method": "sock_set_default_impl", 00:32:31.177 "params": { 00:32:31.177 "impl_name": "posix" 00:32:31.177 } 00:32:31.177 }, 00:32:31.177 { 00:32:31.177 "method": "sock_impl_set_options", 00:32:31.177 "params": { 00:32:31.177 "impl_name": "ssl", 00:32:31.177 "recv_buf_size": 4096, 00:32:31.177 "send_buf_size": 4096, 00:32:31.177 "enable_recv_pipe": true, 00:32:31.177 "enable_quickack": false, 00:32:31.177 "enable_placement_id": 0, 00:32:31.177 "enable_zerocopy_send_server": true, 00:32:31.177 "enable_zerocopy_send_client": false, 00:32:31.177 "zerocopy_threshold": 0, 00:32:31.177 "tls_version": 0, 00:32:31.177 "enable_ktls": false 00:32:31.177 } 00:32:31.177 }, 00:32:31.177 { 00:32:31.177 "method": "sock_impl_set_options", 00:32:31.177 "params": { 00:32:31.177 "impl_name": "posix", 00:32:31.177 "recv_buf_size": 2097152, 00:32:31.177 "send_buf_size": 2097152, 00:32:31.177 "enable_recv_pipe": true, 00:32:31.177 "enable_quickack": false, 00:32:31.177 "enable_placement_id": 0, 00:32:31.177 "enable_zerocopy_send_server": true, 00:32:31.177 "enable_zerocopy_send_client": false, 00:32:31.177 "zerocopy_threshold": 0, 00:32:31.177 "tls_version": 0, 00:32:31.178 "enable_ktls": false 00:32:31.178 } 00:32:31.178 } 00:32:31.178 ] 00:32:31.178 }, 00:32:31.178 { 00:32:31.178 "subsystem": "vmd", 00:32:31.178 "config": [] 00:32:31.178 }, 00:32:31.178 { 00:32:31.178 "subsystem": "accel", 00:32:31.178 "config": [ 00:32:31.178 { 00:32:31.178 "method": "accel_set_options", 00:32:31.178 "params": { 00:32:31.178 "small_cache_size": 128, 00:32:31.178 "large_cache_size": 16, 00:32:31.178 "task_count": 2048, 00:32:31.178 "sequence_count": 2048, 00:32:31.178 "buf_count": 2048 00:32:31.178 } 00:32:31.178 } 00:32:31.178 ] 00:32:31.178 }, 00:32:31.178 { 00:32:31.178 "subsystem": "bdev", 00:32:31.178 "config": [ 00:32:31.178 { 00:32:31.178 "method": "bdev_set_options", 00:32:31.178 "params": { 00:32:31.178 "bdev_io_pool_size": 65535, 00:32:31.178 "bdev_io_cache_size": 256, 00:32:31.178 "bdev_auto_examine": true, 00:32:31.178 "iobuf_small_cache_size": 128, 00:32:31.178 "iobuf_large_cache_size": 16 00:32:31.178 } 00:32:31.178 }, 00:32:31.178 { 00:32:31.178 "method": "bdev_raid_set_options", 00:32:31.178 "params": { 00:32:31.178 "process_window_size_kb": 1024 00:32:31.178 } 00:32:31.178 }, 00:32:31.178 { 00:32:31.178 
"method": "bdev_iscsi_set_options", 00:32:31.178 "params": { 00:32:31.178 "timeout_sec": 30 00:32:31.178 } 00:32:31.178 }, 00:32:31.178 { 00:32:31.178 "method": "bdev_nvme_set_options", 00:32:31.178 "params": { 00:32:31.178 "action_on_timeout": "none", 00:32:31.178 "timeout_us": 0, 00:32:31.178 "timeout_admin_us": 0, 00:32:31.178 "keep_alive_timeout_ms": 10000, 00:32:31.178 "arbitration_burst": 0, 00:32:31.178 "low_priority_weight": 0, 00:32:31.178 "medium_priority_weight": 0, 00:32:31.178 "high_priority_weight": 0, 00:32:31.178 "nvme_adminq_poll_period_us": 10000, 00:32:31.178 "nvme_ioq_poll_period_us": 0, 00:32:31.178 "io_queue_requests": 512, 00:32:31.178 "delay_cmd_submit": true, 00:32:31.178 "transport_retry_count": 4, 00:32:31.178 "bdev_retry_count": 3, 00:32:31.178 "transport_ack_timeout": 0, 00:32:31.178 "ctrlr_loss_timeout_sec": 0, 00:32:31.178 "reconnect_delay_sec": 0, 00:32:31.178 "fast_io_fail_timeout_sec": 0, 00:32:31.178 "disable_auto_failback": false, 00:32:31.178 "generate_uuids": false, 00:32:31.178 "transport_tos": 0, 00:32:31.178 "nvme_error_stat": false, 00:32:31.178 "rdma_srq_size": 0, 00:32:31.178 "io_path_stat": false, 00:32:31.178 "allow_accel_sequence": false, 00:32:31.178 "rdma_max_cq_size": 0, 00:32:31.178 "rdma_cm_event_timeout_ms": 0, 00:32:31.178 "dhchap_digests": [ 00:32:31.178 "sha256", 00:32:31.178 "sha384", 00:32:31.178 "sha512" 00:32:31.178 ], 00:32:31.178 "dhchap_dhgroups": [ 00:32:31.178 "null", 00:32:31.178 "ffdhe2048", 00:32:31.178 "ffdhe3072", 00:32:31.178 "ffdhe4096", 00:32:31.178 "ffdhe6144", 00:32:31.178 "ffdhe8192" 00:32:31.178 ] 00:32:31.178 } 00:32:31.178 }, 00:32:31.178 { 00:32:31.178 "method": "bdev_nvme_attach_controller", 00:32:31.178 "params": { 00:32:31.178 "name": "nvme0", 00:32:31.178 "trtype": "TCP", 00:32:31.178 "adrfam": "IPv4", 00:32:31.178 "traddr": "127.0.0.1", 00:32:31.178 "trsvcid": "4420", 00:32:31.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:31.178 "prchk_reftag": false, 00:32:31.178 "prchk_guard": false, 00:32:31.178 "ctrlr_loss_timeout_sec": 0, 00:32:31.178 "reconnect_delay_sec": 0, 00:32:31.178 "fast_io_fail_timeout_sec": 0, 00:32:31.178 "psk": "key0", 00:32:31.178 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:31.178 "hdgst": false, 00:32:31.178 "ddgst": false 00:32:31.178 } 00:32:31.178 }, 00:32:31.178 { 00:32:31.178 "method": "bdev_nvme_set_hotplug", 00:32:31.178 "params": { 00:32:31.178 "period_us": 100000, 00:32:31.178 "enable": false 00:32:31.178 } 00:32:31.178 }, 00:32:31.178 { 00:32:31.178 "method": "bdev_wait_for_examine" 00:32:31.178 } 00:32:31.178 ] 00:32:31.178 }, 00:32:31.178 { 00:32:31.178 "subsystem": "nbd", 00:32:31.178 "config": [] 00:32:31.178 } 00:32:31.178 ] 00:32:31.178 }' 00:32:31.178 [2024-07-15 14:02:57.681617] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:32:31.178 [2024-07-15 14:02:57.681690] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1325818 ] 00:32:31.439 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.439 [2024-07-15 14:02:57.756403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.439 [2024-07-15 14:02:57.809372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.439 [2024-07-15 14:02:57.951090] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:32.009 14:02:58 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:32.009 14:02:58 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:32.009 14:02:58 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:32.009 14:02:58 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:32.009 14:02:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.269 14:02:58 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:32.269 14:02:58 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:32.269 14:02:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:32.269 14:02:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:32.269 14:02:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:32.269 14:02:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:32.269 14:02:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.269 14:02:58 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:32.269 14:02:58 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:32.269 14:02:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:32.269 14:02:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:32.269 14:02:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:32.269 14:02:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:32.269 14:02:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.529 14:02:58 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:32.529 14:02:58 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:32.529 14:02:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:32.529 14:02:58 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:32.790 14:02:59 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:32.790 14:02:59 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:32.790 14:02:59 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.2jqtJU4tAn /tmp/tmp.eetRVS4fB9 00:32:32.790 14:02:59 keyring_file -- keyring/file.sh@20 -- # killprocess 1325818 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1325818 ']' 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1325818 00:32:32.790 14:02:59 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1325818 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1325818' 00:32:32.790 killing process with pid 1325818 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@967 -- # kill 1325818 00:32:32.790 Received shutdown signal, test time was about 1.000000 seconds 00:32:32.790 00:32:32.790 Latency(us) 00:32:32.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.790 =================================================================================================================== 00:32:32.790 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@972 -- # wait 1325818 00:32:32.790 14:02:59 keyring_file -- keyring/file.sh@21 -- # killprocess 1323994 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1323994 ']' 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1323994 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1323994 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1323994' 00:32:32.790 killing process with pid 1323994 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@967 -- # kill 1323994 00:32:32.790 [2024-07-15 14:02:59.275531] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:32.790 14:02:59 keyring_file -- common/autotest_common.sh@972 -- # wait 1323994 00:32:33.050 00:32:33.050 real 0m10.881s 00:32:33.050 user 0m25.422s 00:32:33.050 sys 0m2.588s 00:32:33.050 14:02:59 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:33.050 14:02:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:33.050 ************************************ 00:32:33.050 END TEST keyring_file 00:32:33.050 ************************************ 00:32:33.050 14:02:59 -- common/autotest_common.sh@1142 -- # return 0 00:32:33.050 14:02:59 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:33.050 14:02:59 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:33.050 14:02:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:33.050 14:02:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:33.050 14:02:59 -- common/autotest_common.sh@10 -- # set +x 00:32:33.050 ************************************ 00:32:33.050 START TEST keyring_linux 00:32:33.050 ************************************ 00:32:33.050 14:02:59 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:33.311 * Looking for test storage... 00:32:33.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:33.311 14:02:59 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:33.311 14:02:59 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:33.311 14:02:59 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.311 14:02:59 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.311 14:02:59 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.311 14:02:59 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.311 14:02:59 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.311 14:02:59 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.311 14:02:59 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:33.311 14:02:59 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:33.311 14:02:59 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:33.311 14:02:59 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:33.311 14:02:59 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:33.311 14:02:59 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:33.311 14:02:59 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:33.311 14:02:59 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:33.311 14:02:59 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:33.311 14:02:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:33.311 14:02:59 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:33.311 14:02:59 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:33.311 14:02:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:33.311 14:02:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:33.311 14:02:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:33.311 14:02:59 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:33.311 14:02:59 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:33.311 14:02:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:33.311 /tmp/:spdk-test:key0 00:32:33.311 14:02:59 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:33.312 14:02:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:33.312 14:02:59 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:33.312 14:02:59 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:33.312 14:02:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:33.312 14:02:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:33.312 14:02:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:33.312 14:02:59 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:33.312 14:02:59 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:33.312 14:02:59 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:33.312 14:02:59 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:33.312 14:02:59 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:33.312 14:02:59 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:33.312 14:02:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:33.312 14:02:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:33.312 /tmp/:spdk-test:key1 00:32:33.312 14:02:59 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1326275 00:32:33.312 14:02:59 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1326275 00:32:33.312 14:02:59 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:33.312 14:02:59 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1326275 ']' 00:32:33.312 14:02:59 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.312 14:02:59 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:33.312 14:02:59 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:33.312 14:02:59 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:33.312 14:02:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:33.312 [2024-07-15 14:02:59.837042] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
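
At this point prep_key has written both interchange-format PSKs to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600. As a minimal sketch (not the test's own format_interchange_psk helper, whose inline python is not echoed in the log), the structure of the string produced for key0 can be pulled apart like this; reading the 4-byte trailer as a checksum over the key is an assumption based on the NVMe TLS PSK interchange format:

  # Sketch: inspect the PSK interchange string that prep_key wrote for key0.
  psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
  payload=${psk#NVMeTLSkey-1:00:}     # drop the "NVMeTLSkey-1" prefix and the "00" (no-hash) digest indicator
  payload=${payload%:}                # drop the trailing colon
  echo "$payload" | base64 -d | xxd   # 32 bytes of key material ("00112233...eeff" as ASCII characters)
                                      # followed by a 4-byte trailer (assumed: a CRC over the key)

The same string is what later gets loaded into the kernel keyring and referenced by bdevperf as --psk :spdk-test:key0.
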
00:32:33.572 [2024-07-15 14:02:59.837097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326275 ] 00:32:33.572 EAL: No free 2048 kB hugepages reported on node 1 00:32:33.572 [2024-07-15 14:02:59.896935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.572 [2024-07-15 14:02:59.962519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.143 14:03:00 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:34.143 14:03:00 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:34.143 14:03:00 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:34.143 14:03:00 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.143 14:03:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:34.143 [2024-07-15 14:03:00.610090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.143 null0 00:32:34.143 [2024-07-15 14:03:00.642141] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:34.143 [2024-07-15 14:03:00.642525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:34.143 14:03:00 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.143 14:03:00 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:34.143 846811339 00:32:34.143 14:03:00 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:34.404 484191282 00:32:34.404 14:03:00 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1326613 00:32:34.404 14:03:00 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:34.404 14:03:00 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1326613 /var/tmp/bperf.sock 00:32:34.404 14:03:00 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1326613 ']' 00:32:34.404 14:03:00 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:34.404 14:03:00 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:34.404 14:03:00 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:34.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:34.404 14:03:00 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:34.404 14:03:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:34.404 [2024-07-15 14:03:00.717732] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
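
Both keys have now been added to the session keyring with keyctl, which printed their serial numbers (846811339 and 484191282 on this run). From here on, linux.sh only ever refers to the keys by name and resolves the serial on demand. A minimal stand-alone sketch of that round trip, using the same keyctl subcommands that appear later in the log (serial numbers will differ on another host):

  psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
  keyctl add user :spdk-test:key0 "$psk" @s     # add to the session keyring; prints the serial
  sn=$(keyctl search @s user :spdk-test:key0)   # resolve the serial by name, as get_keysn does
  keyctl print "$sn"                            # payload must round-trip unchanged
  keyctl unlink "$sn"                           # cleanup, mirroring unlink_key in the trap handler
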
00:32:34.404 [2024-07-15 14:03:00.717778] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326613 ] 00:32:34.404 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.404 [2024-07-15 14:03:00.793054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.404 [2024-07-15 14:03:00.846795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.975 14:03:01 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:34.975 14:03:01 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:34.975 14:03:01 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:34.975 14:03:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:35.236 14:03:01 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:35.236 14:03:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:35.502 14:03:01 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:35.502 14:03:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:35.502 [2024-07-15 14:03:01.949522] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:35.502 nvme0n1 00:32:35.763 14:03:02 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:35.763 14:03:02 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:35.763 14:03:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:35.763 14:03:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:35.763 14:03:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:35.763 14:03:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:35.763 14:03:02 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:35.763 14:03:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:35.763 14:03:02 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:35.763 14:03:02 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:35.763 14:03:02 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:35.763 14:03:02 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:35.763 14:03:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:36.023 14:03:02 keyring_linux -- keyring/linux.sh@25 -- # sn=846811339 00:32:36.023 14:03:02 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:36.023 14:03:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:32:36.023 14:03:02 keyring_linux -- keyring/linux.sh@26 -- # [[ 846811339 == \8\4\6\8\1\1\3\3\9 ]] 00:32:36.023 14:03:02 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 846811339 00:32:36.023 14:03:02 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:36.023 14:03:02 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:36.023 Running I/O for 1 seconds... 00:32:36.961 00:32:36.961 Latency(us) 00:32:36.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.961 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:36.961 nvme0n1 : 1.01 10643.81 41.58 0.00 0.00 11926.56 8246.61 20753.07 00:32:36.961 =================================================================================================================== 00:32:36.961 Total : 10643.81 41.58 0.00 0.00 11926.56 8246.61 20753.07 00:32:36.961 0 00:32:36.961 14:03:03 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:36.961 14:03:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:37.219 14:03:03 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:37.219 14:03:03 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:37.219 14:03:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:37.219 14:03:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:37.219 14:03:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:37.219 14:03:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:37.479 14:03:03 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:32:37.479 14:03:03 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:37.479 14:03:03 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:37.479 14:03:03 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:37.479 14:03:03 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:37.479 14:03:03 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:37.479 14:03:03 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:37.479 14:03:03 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:37.479 [2024-07-15 14:03:03.944463] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:37.479 [2024-07-15 14:03:03.944708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2545950 (107): Transport endpoint is not connected 00:32:37.479 [2024-07-15 14:03:03.945703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2545950 (9): Bad file descriptor 00:32:37.479 [2024-07-15 14:03:03.946705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:37.479 [2024-07-15 14:03:03.946712] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:37.479 [2024-07-15 14:03:03.946718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:37.479 request: 00:32:37.479 { 00:32:37.479 "name": "nvme0", 00:32:37.479 "trtype": "tcp", 00:32:37.479 "traddr": "127.0.0.1", 00:32:37.479 "adrfam": "ipv4", 00:32:37.479 "trsvcid": "4420", 00:32:37.479 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:37.479 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:37.479 "prchk_reftag": false, 00:32:37.479 "prchk_guard": false, 00:32:37.479 "hdgst": false, 00:32:37.479 "ddgst": false, 00:32:37.479 "psk": ":spdk-test:key1", 00:32:37.479 "method": "bdev_nvme_attach_controller", 00:32:37.479 "req_id": 1 00:32:37.479 } 00:32:37.479 Got JSON-RPC error response 00:32:37.479 response: 00:32:37.479 { 00:32:37.479 "code": -5, 00:32:37.479 "message": "Input/output error" 00:32:37.479 } 00:32:37.479 14:03:03 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:32:37.479 14:03:03 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:37.479 14:03:03 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:37.479 14:03:03 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@33 -- # sn=846811339 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 846811339 00:32:37.479 1 links removed 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@33 -- # sn=484191282 00:32:37.479 
14:03:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 484191282 00:32:37.479 1 links removed 00:32:37.479 14:03:03 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1326613 00:32:37.479 14:03:03 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1326613 ']' 00:32:37.479 14:03:03 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1326613 00:32:37.479 14:03:03 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:37.479 14:03:03 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:37.479 14:03:03 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1326613 00:32:37.740 14:03:04 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:37.740 14:03:04 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:37.740 14:03:04 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1326613' 00:32:37.740 killing process with pid 1326613 00:32:37.740 14:03:04 keyring_linux -- common/autotest_common.sh@967 -- # kill 1326613 00:32:37.740 Received shutdown signal, test time was about 1.000000 seconds 00:32:37.740 00:32:37.740 Latency(us) 00:32:37.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.740 =================================================================================================================== 00:32:37.740 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:37.740 14:03:04 keyring_linux -- common/autotest_common.sh@972 -- # wait 1326613 00:32:37.740 14:03:04 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1326275 00:32:37.740 14:03:04 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1326275 ']' 00:32:37.740 14:03:04 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1326275 00:32:37.740 14:03:04 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:37.740 14:03:04 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:37.740 14:03:04 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1326275 00:32:37.740 14:03:04 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:37.740 14:03:04 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:37.740 14:03:04 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1326275' 00:32:37.740 killing process with pid 1326275 00:32:37.740 14:03:04 keyring_linux -- common/autotest_common.sh@967 -- # kill 1326275 00:32:37.740 14:03:04 keyring_linux -- common/autotest_common.sh@972 -- # wait 1326275 00:32:38.001 00:32:38.001 real 0m4.847s 00:32:38.001 user 0m8.197s 00:32:38.001 sys 0m1.366s 00:32:38.001 14:03:04 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:38.001 14:03:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:38.001 ************************************ 00:32:38.001 END TEST keyring_linux 00:32:38.001 ************************************ 00:32:38.001 14:03:04 -- common/autotest_common.sh@1142 -- # return 0 00:32:38.001 14:03:04 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:38.001 14:03:04 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:38.001 14:03:04 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:38.001 14:03:04 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:38.001 14:03:04 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:38.001 14:03:04 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:38.001 14:03:04 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:38.001 14:03:04 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:38.001 14:03:04 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:38.001 14:03:04 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:38.001 14:03:04 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:38.001 14:03:04 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:38.001 14:03:04 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:38.001 14:03:04 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:38.001 14:03:04 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:38.001 14:03:04 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:38.001 14:03:04 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:38.001 14:03:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:38.001 14:03:04 -- common/autotest_common.sh@10 -- # set +x 00:32:38.001 14:03:04 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:38.001 14:03:04 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:38.001 14:03:04 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:38.001 14:03:04 -- common/autotest_common.sh@10 -- # set +x 00:32:46.140 INFO: APP EXITING 00:32:46.140 INFO: killing all VMs 00:32:46.140 INFO: killing vhost app 00:32:46.140 WARN: no vhost pid file found 00:32:46.140 INFO: EXIT DONE 00:32:48.688 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:32:48.688 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:32:48.949 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:32:48.949 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:32:48.949 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:32:48.949 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:32:48.949 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:32:48.949 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:32:48.949 0000:65:00.0 (144d a80a): Already using the nvme driver 00:32:48.949 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:32:48.949 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:32:48.949 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:32:48.949 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:32:49.209 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:32:49.209 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:32:49.209 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:32:49.209 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:32:52.513 Cleaning 00:32:52.513 Removing: /var/run/dpdk/spdk0/config 00:32:52.513 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:52.513 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:52.513 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:52.513 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:52.513 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:52.513 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:52.513 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:52.513 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:52.513 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:52.513 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:52.513 Removing: /var/run/dpdk/spdk1/config 00:32:52.513 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:52.513 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:52.513 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:52.513 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:52.513 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:52.513 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:52.513 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:52.513 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:52.513 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:52.513 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:52.513 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:52.513 Removing: /var/run/dpdk/spdk2/config 00:32:52.513 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:52.513 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:52.513 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:52.513 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:52.513 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:52.513 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:52.513 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:52.513 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:52.513 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:52.513 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:52.513 Removing: /var/run/dpdk/spdk3/config 00:32:52.513 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:52.513 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:52.513 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:52.513 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:52.513 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:52.513 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:52.513 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:52.513 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:52.513 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:52.513 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:52.513 Removing: /var/run/dpdk/spdk4/config 00:32:52.513 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:52.513 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:52.513 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:52.513 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:52.513 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:52.513 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:52.513 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:52.513 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:52.513 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:52.513 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:52.513 Removing: /dev/shm/bdev_svc_trace.1 00:32:52.513 Removing: /dev/shm/nvmf_trace.0 00:32:52.513 Removing: /dev/shm/spdk_tgt_trace.pid868072 00:32:52.513 Removing: /var/run/dpdk/spdk0 00:32:52.513 Removing: /var/run/dpdk/spdk1 00:32:52.513 Removing: /var/run/dpdk/spdk2 00:32:52.513 Removing: /var/run/dpdk/spdk3 00:32:52.513 Removing: /var/run/dpdk/spdk4 00:32:52.513 Removing: /var/run/dpdk/spdk_pid1000178 00:32:52.513 Removing: /var/run/dpdk/spdk_pid1001185 00:32:52.513 Removing: /var/run/dpdk/spdk_pid1001861 00:32:52.513 Removing: /var/run/dpdk/spdk_pid1001866 00:32:52.513 Removing: /var/run/dpdk/spdk_pid1002198 00:32:52.513 Removing: /var/run/dpdk/spdk_pid1003631 00:32:52.513 Removing: /var/run/dpdk/spdk_pid1004879 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1015054 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1015405 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1020444 00:32:52.774 Removing: 
/var/run/dpdk/spdk_pid1027512 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1030826 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1043169 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1053915 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1055971 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1056985 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1077256 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1082040 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1113957 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1119346 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1121348 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1123548 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1123817 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1124391 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1124694 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1125351 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1127486 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1128479 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1129054 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1131518 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1132245 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1133175 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1137989 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1150188 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1155045 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1162182 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1163734 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1165262 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1170441 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1175935 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1184776 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1184882 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1189746 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1190066 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1190398 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1190808 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1190918 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1196433 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1196966 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1202272 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1205458 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1211922 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1218596 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1228373 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1237494 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1237496 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1259955 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1260638 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1261327 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1262115 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1263076 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1263890 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1264691 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1265435 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1270504 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1270837 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1277868 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1278174 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1281326 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1288433 00:32:52.774 Removing: /var/run/dpdk/spdk_pid1288438 00:32:53.035 Removing: /var/run/dpdk/spdk_pid1294301 00:32:53.035 Removing: /var/run/dpdk/spdk_pid1296696 00:32:53.035 Removing: /var/run/dpdk/spdk_pid1299006 00:32:53.035 Removing: /var/run/dpdk/spdk_pid1300489 00:32:53.035 Removing: /var/run/dpdk/spdk_pid1302824 00:32:53.035 Removing: /var/run/dpdk/spdk_pid1304246 00:32:53.035 Removing: 
/var/run/dpdk/spdk_pid1314174 00:32:53.035 Removing: /var/run/dpdk/spdk_pid1314840 00:32:53.035 Removing: /var/run/dpdk/spdk_pid1315440 00:32:53.035 Removing: /var/run/dpdk/spdk_pid1318173 00:32:53.035 Removing: /var/run/dpdk/spdk_pid1318791 00:32:53.035 Removing: /var/run/dpdk/spdk_pid1319461 00:32:53.035 Removing: /var/run/dpdk/spdk_pid1323994 00:32:53.035 Removing: /var/run/dpdk/spdk_pid1324326 00:32:53.035 Removing: /var/run/dpdk/spdk_pid1325818 00:32:53.035 Removing: /var/run/dpdk/spdk_pid1326275 00:32:53.035 Removing: /var/run/dpdk/spdk_pid1326613 00:32:53.035 Removing: /var/run/dpdk/spdk_pid866494 00:32:53.035 Removing: /var/run/dpdk/spdk_pid868072 00:32:53.035 Removing: /var/run/dpdk/spdk_pid868601 00:32:53.035 Removing: /var/run/dpdk/spdk_pid869783 00:32:53.035 Removing: /var/run/dpdk/spdk_pid869977 00:32:53.035 Removing: /var/run/dpdk/spdk_pid871219 00:32:53.035 Removing: /var/run/dpdk/spdk_pid871373 00:32:53.035 Removing: /var/run/dpdk/spdk_pid871734 00:32:53.035 Removing: /var/run/dpdk/spdk_pid872732 00:32:53.035 Removing: /var/run/dpdk/spdk_pid873503 00:32:53.035 Removing: /var/run/dpdk/spdk_pid873839 00:32:53.035 Removing: /var/run/dpdk/spdk_pid874123 00:32:53.035 Removing: /var/run/dpdk/spdk_pid874602 00:32:53.035 Removing: /var/run/dpdk/spdk_pid875134 00:32:53.035 Removing: /var/run/dpdk/spdk_pid875570 00:32:53.035 Removing: /var/run/dpdk/spdk_pid875918 00:32:53.035 Removing: /var/run/dpdk/spdk_pid876175 00:32:53.035 Removing: /var/run/dpdk/spdk_pid877368 00:32:53.035 Removing: /var/run/dpdk/spdk_pid880615 00:32:53.035 Removing: /var/run/dpdk/spdk_pid880977 00:32:53.035 Removing: /var/run/dpdk/spdk_pid881348 00:32:53.035 Removing: /var/run/dpdk/spdk_pid881388 00:32:53.035 Removing: /var/run/dpdk/spdk_pid881961 00:32:53.035 Removing: /var/run/dpdk/spdk_pid882068 00:32:53.035 Removing: /var/run/dpdk/spdk_pid882462 00:32:53.035 Removing: /var/run/dpdk/spdk_pid882776 00:32:53.035 Removing: /var/run/dpdk/spdk_pid883035 00:32:53.035 Removing: /var/run/dpdk/spdk_pid883155 00:32:53.035 Removing: /var/run/dpdk/spdk_pid883492 00:32:53.035 Removing: /var/run/dpdk/spdk_pid883525 00:32:53.035 Removing: /var/run/dpdk/spdk_pid883980 00:32:53.035 Removing: /var/run/dpdk/spdk_pid884321 00:32:53.035 Removing: /var/run/dpdk/spdk_pid884711 00:32:53.035 Removing: /var/run/dpdk/spdk_pid885065 00:32:53.035 Removing: /var/run/dpdk/spdk_pid885104 00:32:53.035 Removing: /var/run/dpdk/spdk_pid885166 00:32:53.035 Removing: /var/run/dpdk/spdk_pid885521 00:32:53.035 Removing: /var/run/dpdk/spdk_pid885868 00:32:53.035 Removing: /var/run/dpdk/spdk_pid886207 00:32:53.035 Removing: /var/run/dpdk/spdk_pid886384 00:32:53.035 Removing: /var/run/dpdk/spdk_pid886609 00:32:53.035 Removing: /var/run/dpdk/spdk_pid886958 00:32:53.035 Removing: /var/run/dpdk/spdk_pid887314 00:32:53.035 Removing: /var/run/dpdk/spdk_pid887663 00:32:53.035 Removing: /var/run/dpdk/spdk_pid887882 00:32:53.035 Removing: /var/run/dpdk/spdk_pid888083 00:32:53.035 Removing: /var/run/dpdk/spdk_pid888402 00:32:53.296 Removing: /var/run/dpdk/spdk_pid888751 00:32:53.296 Removing: /var/run/dpdk/spdk_pid889104 00:32:53.296 Removing: /var/run/dpdk/spdk_pid889379 00:32:53.296 Removing: /var/run/dpdk/spdk_pid889572 00:32:53.296 Removing: /var/run/dpdk/spdk_pid889845 00:32:53.296 Removing: /var/run/dpdk/spdk_pid890195 00:32:53.296 Removing: /var/run/dpdk/spdk_pid890548 00:32:53.296 Removing: /var/run/dpdk/spdk_pid890884 00:32:53.296 Removing: /var/run/dpdk/spdk_pid891095 00:32:53.296 Removing: /var/run/dpdk/spdk_pid891331 00:32:53.296 Removing: 
/var/run/dpdk/spdk_pid891733 00:32:53.296 Removing: /var/run/dpdk/spdk_pid896055 00:32:53.296 Removing: /var/run/dpdk/spdk_pid949690 00:32:53.296 Removing: /var/run/dpdk/spdk_pid954727 00:32:53.296 Removing: /var/run/dpdk/spdk_pid966537 00:32:53.296 Removing: /var/run/dpdk/spdk_pid972866 00:32:53.296 Removing: /var/run/dpdk/spdk_pid977934 00:32:53.296 Removing: /var/run/dpdk/spdk_pid978626 00:32:53.296 Removing: /var/run/dpdk/spdk_pid986448 00:32:53.296 Removing: /var/run/dpdk/spdk_pid993666 00:32:53.296 Removing: /var/run/dpdk/spdk_pid993758 00:32:53.296 Removing: /var/run/dpdk/spdk_pid994805 00:32:53.296 Removing: /var/run/dpdk/spdk_pid995811 00:32:53.296 Removing: /var/run/dpdk/spdk_pid996814 00:32:53.296 Removing: /var/run/dpdk/spdk_pid997495 00:32:53.296 Removing: /var/run/dpdk/spdk_pid997603 00:32:53.296 Removing: /var/run/dpdk/spdk_pid997841 00:32:53.296 Removing: /var/run/dpdk/spdk_pid998087 00:32:53.296 Removing: /var/run/dpdk/spdk_pid998171 00:32:53.296 Removing: /var/run/dpdk/spdk_pid999177 00:32:53.296 Clean 00:32:53.296 14:03:19 -- common/autotest_common.sh@1451 -- # return 0 00:32:53.296 14:03:19 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:53.296 14:03:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:53.296 14:03:19 -- common/autotest_common.sh@10 -- # set +x 00:32:53.296 14:03:19 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:32:53.296 14:03:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:53.296 14:03:19 -- common/autotest_common.sh@10 -- # set +x 00:32:53.557 14:03:19 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:53.557 14:03:19 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:53.557 14:03:19 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:53.557 14:03:19 -- spdk/autotest.sh@391 -- # hash lcov 00:32:53.557 14:03:19 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:53.557 14:03:19 -- spdk/autotest.sh@393 -- # hostname 00:32:53.557 14:03:19 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:53.557 geninfo: WARNING: invalid characters removed from testname! 
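
The coverage post-processing that follows merges the pre-test baseline with the capture taken above and then strips everything that is not SPDK's own code. A compact rendering of those steps (the flags are the ones visible in the log, minus the genhtml rc options; the log actually runs each filter as a separate lcov invocation rather than a loop):

  out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'

  # merge the pre-test baseline with the post-test capture
  lcov $LCOV_OPTS -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info

  # drop coverage for code that is not SPDK's own
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r $out/cov_total.info "$pattern" -o $out/cov_total.info
  done
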
00:33:20.135 14:03:44 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:20.394 14:03:46 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:22.938 14:03:48 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:24.317 14:03:50 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:25.698 14:03:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:27.610 14:03:53 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:28.996 14:03:55 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:28.996 14:03:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:28.996 14:03:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:28.996 14:03:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.996 14:03:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.996 14:03:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.997 14:03:55 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.997 14:03:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.997 14:03:55 -- paths/export.sh@5 -- $ export PATH 00:33:28.997 14:03:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.997 14:03:55 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:28.997 14:03:55 -- common/autobuild_common.sh@444 -- $ date +%s 00:33:28.997 14:03:55 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721045035.XXXXXX 00:33:28.997 14:03:55 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721045035.BSvHlI 00:33:28.997 14:03:55 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:33:28.997 14:03:55 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:33:28.997 14:03:55 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:33:28.997 14:03:55 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:28.997 14:03:55 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:28.997 14:03:55 -- common/autobuild_common.sh@460 -- $ get_config_params 00:33:28.997 14:03:55 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:33:28.997 14:03:55 -- common/autotest_common.sh@10 -- $ set +x 00:33:28.997 14:03:55 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:33:28.997 14:03:55 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:33:28.997 14:03:55 -- pm/common@17 -- $ local monitor 00:33:28.997 14:03:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:28.997 14:03:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:28.997 14:03:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:28.997 14:03:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:28.997 14:03:55 -- pm/common@21 -- $ date +%s 00:33:28.997 14:03:55 -- pm/common@25 -- $ sleep 1 00:33:28.997 
14:03:55 -- pm/common@21 -- $ date +%s 00:33:28.997 14:03:55 -- pm/common@21 -- $ date +%s 00:33:28.997 14:03:55 -- pm/common@21 -- $ date +%s 00:33:28.997 14:03:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721045035 00:33:28.997 14:03:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721045035 00:33:28.997 14:03:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721045035 00:33:28.997 14:03:55 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721045035 00:33:28.997 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721045035_collect-vmstat.pm.log 00:33:28.997 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721045035_collect-cpu-load.pm.log 00:33:28.997 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721045035_collect-cpu-temp.pm.log 00:33:28.997 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721045035_collect-bmc-pm.bmc.pm.log 00:33:29.995 14:03:56 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:33:29.995 14:03:56 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:33:29.995 14:03:56 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:29.995 14:03:56 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:29.995 14:03:56 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:29.995 14:03:56 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:29.995 14:03:56 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:29.995 14:03:56 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:29.995 14:03:56 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:29.995 14:03:56 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:29.995 14:03:56 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:29.995 14:03:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:29.995 14:03:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:29.995 14:03:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:29.995 14:03:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:29.995 14:03:56 -- pm/common@44 -- $ pid=1339272 00:33:29.995 14:03:56 -- pm/common@50 -- $ kill -TERM 1339272 00:33:29.995 14:03:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:29.995 14:03:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:29.995 14:03:56 -- pm/common@44 -- $ pid=1339273 00:33:29.995 14:03:56 -- pm/common@50 -- $ 
kill -TERM 1339273 00:33:29.995 14:03:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:29.995 14:03:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:29.995 14:03:56 -- pm/common@44 -- $ pid=1339275 00:33:29.995 14:03:56 -- pm/common@50 -- $ kill -TERM 1339275 00:33:29.995 14:03:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:29.995 14:03:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:29.995 14:03:56 -- pm/common@44 -- $ pid=1339298 00:33:29.995 14:03:56 -- pm/common@50 -- $ sudo -E kill -TERM 1339298 00:33:30.255 + [[ -n 746715 ]] 00:33:30.255 + sudo kill 746715 00:33:30.265 [Pipeline] } 00:33:30.282 [Pipeline] // stage 00:33:30.287 [Pipeline] } 00:33:30.303 [Pipeline] // timeout 00:33:30.308 [Pipeline] } 00:33:30.325 [Pipeline] // catchError 00:33:30.330 [Pipeline] } 00:33:30.350 [Pipeline] // wrap 00:33:30.356 [Pipeline] } 00:33:30.372 [Pipeline] // catchError 00:33:30.379 [Pipeline] stage 00:33:30.381 [Pipeline] { (Epilogue) 00:33:30.395 [Pipeline] catchError 00:33:30.397 [Pipeline] { 00:33:30.411 [Pipeline] echo 00:33:30.412 Cleanup processes 00:33:30.418 [Pipeline] sh 00:33:30.708 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:30.709 1339368 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:33:30.709 1339823 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:30.723 [Pipeline] sh 00:33:31.009 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:31.009 ++ grep -v 'sudo pgrep' 00:33:31.009 ++ awk '{print $1}' 00:33:31.009 + sudo kill -9 1339368 00:33:31.023 [Pipeline] sh 00:33:31.311 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:43.547 [Pipeline] sh 00:33:43.832 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:43.832 Artifacts sizes are good 00:33:43.849 [Pipeline] archiveArtifacts 00:33:43.858 Archiving artifacts 00:33:44.048 [Pipeline] sh 00:33:44.332 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:44.347 [Pipeline] cleanWs 00:33:44.356 [WS-CLEANUP] Deleting project workspace... 00:33:44.356 [WS-CLEANUP] Deferred wipeout is used... 00:33:44.362 [WS-CLEANUP] done 00:33:44.365 [Pipeline] } 00:33:44.387 [Pipeline] // catchError 00:33:44.398 [Pipeline] sh 00:33:44.682 + logger -p user.info -t JENKINS-CI 00:33:44.693 [Pipeline] } 00:33:44.707 [Pipeline] // stage 00:33:44.712 [Pipeline] } 00:33:44.729 [Pipeline] // node 00:33:44.735 [Pipeline] End of Pipeline 00:33:44.767 Finished: SUCCESS
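
For reference, the teardown that closes the run boils down to two stages: autopackage's stop_monitor_resources signals each resource collector through its pid file, and the Jenkins epilogue force-kills anything still running out of the workspace before archiving. A rough sketch of that pattern (pid-file names as in the log; the xargs pipeline is a compact stand-in for the per-pid kill -9 seen above, and the bmc collector is actually TERMed with sudo -E in the real script):

  power=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power

  # 1) stop_monitor_resources: TERM each collector via its pid file
  for mon in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
      [[ -e $power/$mon.pid ]] && kill -TERM "$(cat "$power/$mon.pid")"
  done

  # 2) pipeline epilogue: force-kill leftovers still attached to the workspace
  sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
      | grep -v 'sudo pgrep' | awk '{print $1}' | xargs -r sudo kill -9
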